
Templates

Out-of-the-box configuration templates, scenario-specific configuration examples, and detailed explanations of the config files.

Pigsty ships with a variety of ready-to-use configuration templates for different scenarios.

You can specify a configuration template with -c when running configure. If no template is specified, the default meta template is used.

Template Categories

  • Single-Node Templates: meta, rich, fat, slim, infra
  • Kernel Templates: pgsql, citus, mssql, polar, ivory, mysql, pgtde, oriole, supabase
  • HA Templates: ha/simu, ha/full, ha/safe, ha/trio, ha/dual
  • Application Templates: app/odoo, app/dify, app/electric, app/maybe, app/teable, app/registry
  • Other Templates: demo/el, demo/debian, demo/demo, demo/minio, build/oss, build/pro
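
Any template above can be passed to configure via -c, including the subdirectory ones. A sketch, assuming a deployment based on the ha/full template from the list:

./configure -c ha/full [-i <primary_ip>]   # select the ha/full HA template
./deploy.yml                               # then run the deployment playbook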

1 - Single-Node Templates

2 - meta

Pigsty's default configuration template: single node, core functionality, standard single-host setup, online installation, local backup repo.

The meta template is what Pigsty uses by default; its goal is to deploy Pigsty's core functionality, PostgreSQL, on the current single node.

For maximum compatibility, the meta template downloads and installs only the minimal required software set, so that it works on all supported OS distros and chip architectures.


Configuration Overview

  • Name: meta
  • Node Count: 1 node
  • Description: Pigsty's default single-node installation template, with well-commented key parameters and a minimal viable feature set
  • OS: el8, el9, el10, d12, d13, u22, u24
  • Arch: x86_64, aarch64
  • Related: meta, slim, fat

Usage: this is Pigsty's default configuration template, so there is no need to pass -c meta explicitly when running configure:

./configure [-i <primary_ip>]

For example, to install PostgreSQL 17 instead of the default PostgreSQL 18, pass the -v flag to configure:

./configure -v 17   # or 16,15,14,13....
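
configure renders the final inventory (pigsty.yml in the repo root, assuming the standard Pigsty layout), so the effect of -v can be verified right away:

grep 'pg_version' pigsty.yml   # should show: pg_version: 17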

Configuration Content

Source file: pigsty/conf/meta.yml

---
#==============================================================#
# File      :   meta.yml
# Desc      :   Pigsty default 1-node online install config
# Ctime     :   2020-05-22
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the default 1-node configuration template, with:
# INFRA, NODE, PGSQL, ETCD, MINIO, DOCKER, APP (pgadmin)
# with basic pg extensions: postgis, pgvector
#
# Work with PostgreSQL 14-18 on all supported platform
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure
#   ./deploy.yml

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql
    #----------------------------------------------#
    # this is an example single-node postgres cluster with pgvector installed, with one biz database & two biz users
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary } # <---- primary instance with read-write capability
        #x.xx.xx.xx: { pg_seq: 2, pg_role: replica } # <---- read only replica for read-only online traffic
        #x.xx.xx.xy: { pg_seq: 3, pg_role: offline } # <---- offline instance of ETL & interactive queries
      vars:
        pg_cluster: pg-meta

        # install, load, create pg extensions: https://doc.pgsty.com/pgsql/extension
        pg_extensions: [ postgis, pgvector ]

        # define business users/roles : https://doc.pgsty.com/pgsql/user
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }

        # define business databases : https://doc.pgsty.com/pgsql/db
        pg_databases:
          - name: meta
            baseline: cmdb.sql
            comment: "pigsty meta database"
            schemas: [pigsty]
            # define extensions in database : https://doc.pgsty.com/pgsql/extension/create
            extensions: [ postgis, vector ]

        # define HBA rules : https://doc.pgsty.com/pgsql/hba
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }

        # define backup policies: https://doc.pgsty.com/pgsql/backup
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every day 1am

        # define (OPTIONAL) L2 VIP that bind to primary
        #pg_vip_enabled: true
        #pg_vip_address: 10.10.10.2/24
        #pg_vip_interface: eth1


    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        repo_enabled: false   # disable in 1-node mode :  https://doc.pgsty.com/admin/repo
        #repo_extra_packages: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # ETCD : https://doc.pgsty.com/etcd
    #----------------------------------------------#
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd
        etcd_safeguard: false             # prevent purging running etcd instance?

    #----------------------------------------------#
    # MINIO : https://doc.pgsty.com/minio
    #----------------------------------------------#
    #minio:
    #  hosts:
    #    10.10.10.10: { minio_seq: 1 }
    #  vars:
    #    minio_cluster: minio
    #    minio_users:                      # list of minio user to be created
    #      - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
    #      - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
    #      - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    #----------------------------------------------#
    # DOCKER : https://doc.pgsty.com/docker
    # APP    : https://doc.pgsty.com/app
    #----------------------------------------------#
    # launch example pgadmin app with: ./app.yml (http://10.10.10.10:8885 [email protected] / pigsty)
    app:
      hosts: { 10.10.10.10: {} }
      vars:
        docker_enabled: true                # enable docker with ./docker.yml
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        app: pgadmin                        # specify the default app name to be installed (in the apps)
        apps:                               # define all applications, appname: definition
          pgadmin:                          # pgadmin app definition (app/pgadmin -> /opt/pgadmin)
            conf:                           # override /opt/pgadmin/.env
              PGADMIN_DEFAULT_EMAIL: [email protected]
              PGADMIN_DEFAULT_PASSWORD: pigsty


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: china                     # upstream mirror region: default|china|europe
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name
      pgadmin : { domain: adm.pigsty ,endpoint: "${admin_ip}:8885" }
      #minio  : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname on single node mode
    node_tune: oltp                       # node tuning specs: oltp,olap,tiny,crit
    node_etc_hosts: [ '${admin_ip} i.pigsty sss.pigsty' ]
    node_repo_modules: 'node,infra,pgsql' # add these repos directly to the singleton node
    #node_repo_modules: local             # use this if you want to build & use local repo
    node_repo_remove: true                # remove existing node repo for node managed by pigsty
    #node_packages: [openssh-server]      # packages to be installed current nodes with the latest version

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # default postgres version
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_safeguard: false                 # prevent purging running postgres instance?
    pg_packages: [ pgsql-main, pgsql-common ]                 # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # BACKUP : https://doc.pgsty.com/pgsql/backup
    #----------------------------------------------#
    # if you want to use minio as backup repo instead of 'local' fs, uncomment this, and configure `pgbackrest_repo`
    # you can also use external object storage as backup repo
    #pgbackrest_method: minio          # if you want to use minio as backup repo instead of 'local' fs, uncomment this
    #pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
    #  local:                          # default pgbackrest repo with local posix fs
    #    path: /pg/backup              # local backup directory, `/pg/backup` by default
    #    retention_full_type: count    # retention full backups by count
    #    retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
    #  minio:                          # optional minio repo for pgbackrest
    #    type: s3                      # minio is s3-compatible, so s3 is used
    #    s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
    #    s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
    #    s3_bucket: pgsql              # minio bucket name, `pgsql` by default
    #    s3_key: pgbackrest            # minio user access key for pgbackrest
    #    s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
    #    s3_uri_style: path            # use path style uri for minio rather than host style
    #    path: /pgbackrest             # minio backup path, default is `/pgbackrest`
    #    storage_port: 9000            # minio port, 9000 by default
    #    storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
    #    block: y                      # Enable block incremental backup
    #    bundle: y                     # bundle small files into a single file
    #    bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
    #    bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
    #    cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
    #    cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
    #    retention_full_type: time     # retention full backup by time on minio repo
    #    retention_full: 14            # keep full backup for last 14 days
    #  s3: # aliyun oss (s3 compatible) object storage service
    #    type: s3                      # oss is s3-compatible
    #    s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
    #    s3_region: oss-cn-beijing
    #    s3_bucket: <your_bucket_name>
    #    s3_key: <your_access_key>
    #    s3_key_secret: <your_secret_key>
    #    s3_uri_style: host
    #    path: /pgbackrest
    #    bundle: y                     # bundle small files into a single file
    #    bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
    #    bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
    #    cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
    #    cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
    #    retention_full_type: time     # retention full backup by time on minio repo
    #    retention_full: 14            # keep full backup for last 14 days

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Explained

The meta template is Pigsty's default starter configuration, designed for getting up and running quickly.

Use Cases

  • First-time Pigsty users
  • Quick setup of dev/test environments
  • Small single-node production deployments
  • A base template for more complex deployments

Key Features

  • Online installation; no local software repo is built (repo_enabled: false)
  • Installs PostgreSQL 18 by default, with the postgis and pgvector extensions
  • Includes the full monitoring infrastructure (Grafana, Prometheus, Loki, etc.)
  • Ships with Docker and a sample pgAdmin application
  • MinIO backup storage is disabled by default and can be enabled as needed
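
After ./deploy.yml completes, a quick sanity check can be run against the predefined meta database, using the dbuser_meta credentials defined in this template (a sketch; substitute your own primary IP):

psql postgres://dbuser_meta:[email protected]:5432/meta -c 'SELECT version();'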

Caveats

  • The default passwords are samples; be sure to change them in production
  • Single-node etcd offers no HA guarantee and is only suitable for dev/test
  • To build a local software repo, use the rich template
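
For the password caveat above, a minimal sketch of the overrides to put in the global vars of the generated inventory (parameter names are from the PASSWORD section of meta.yml; the placeholder values are yours to fill in):

grafana_admin_password: <your_grafana_password>
pg_admin_password: <your_dba_password>
pg_monitor_password: <your_monitor_password>
pg_replication_password: <your_replication_password>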

3 - rich

A feature-rich single-node configuration: builds a local software repo, downloads all extensions, enables MinIO backups, and ships with complete examples.

The rich template is an enhanced version of meta, designed for users who want the complete feature experience.

Use this template if you want to build a local software repo, store backups in MinIO, run Docker applications, or pre-provision business databases.


Configuration Overview

  • Name: rich
  • Node Count: 1 node
  • Description: a feature-rich single-node configuration that adds a local software repo, MinIO backups, the full extension set, and Docker application examples on top of meta
  • OS: el8, el9, el10, d12, d13, u22, u24
  • Arch: x86_64, aarch64
  • Related: meta, slim, fat

Main enhancements over meta:

  • Builds a local software repo (repo_enabled: true) and downloads all PG extensions
  • Enables single-node MinIO as PostgreSQL backup storage
  • Pre-installs TimescaleDB, pgvector, pg_wait_sampling, and other extensions
  • Includes detailed, commented examples of user/database/service definitions
  • Adds a Redis primary-replica instance example
  • Includes a configuration stub for a 3-node pg-test HA cluster

To enable:

./configure -c rich [-i <primary_ip>]

Configuration Content

Source file: pigsty/conf/rich.yml

---
#==============================================================#
# File      :   rich.yml
# Desc      :   Pigsty feature-rich 1-node online install config
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the enhanced version of default meta.yml, which has:
# - almost all available postgres extensions
# - build local software repo for entire env
# - 1 node minio used as central backup repo
# - cluster stub for 3-node pg-test / ferret / redis
# - stub for nginx, certs, and website self-hosting config
# - detailed comments for database / user / service
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c rich
#   ./deploy.yml

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql
    #----------------------------------------------#
    # this is an example single-node postgres cluster with pgvector installed, with one biz database & two biz users
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary } # <---- primary instance with read-write capability
        #x.xx.xx.xx: { pg_seq: 2, pg_role: replica } # <---- read only replica for read-only online traffic
        #x.xx.xx.xy: { pg_seq: 3, pg_role: offline } # <---- offline instance of ETL & interactive queries
      vars:
        pg_cluster: pg-meta

        # install, load, create pg extensions: https://doc.pgsty.com/pgsql/extension
        pg_extensions: [ postgis, timescaledb, pgvector, pg_wait_sampling ]
        pg_libs: 'timescaledb, pg_stat_statements, auto_explain, pg_wait_sampling'

        # define business users/roles : https://doc.pgsty.com/pgsql/user
        pg_users:
          - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
            password: DBUser.Meta           # optional, the password. can be a scram-sha-256 hash string or plain text
            #state: create                   # optional, create|absent, 'create' by default, use 'absent' to drop user
            #login: true                     # optional, can log in, true by default (new biz ROLE should be false)
            #superuser: false                # optional, is superuser? false by default
            #createdb: false                 # optional, can create databases? false by default
            #createrole: false               # optional, can create role? false by default
            #inherit: true                   # optional, can this role use inherited privileges? true by default
            #replication: false              # optional, can this role do replication? false by default
            #bypassrls: false                # optional, can this role bypass row level security? false by default
            #pgbouncer: true                 # optional, add this user to the pgbouncer user-list? false by default (production user should be true explicitly)
            #connlimit: -1                   # optional, user connection limit, default -1 disable limit
            #expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
            #expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired (OVERWRITTEN by expire_in)
            #comment: pigsty admin user      # optional, comment string for this user/role
            #roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin|readonly|readwrite|offline}
            #parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
            #pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
            #pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
            # Enhanced roles syntax (PG16+): roles can be string or object with options:
            #   - dbrole_readwrite                       # simple string: GRANT role
            #   - { name: role, admin: true }            # GRANT WITH ADMIN OPTION
            #   - { name: role, set: false }             # PG16: REVOKE SET OPTION
            #   - { name: role, inherit: false }         # PG16: REVOKE INHERIT OPTION
            #   - { name: role, state: absent }          # REVOKE membership
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database }
          #- {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for bytebase database   }
          #- {name: dbuser_remove ,state: absent }       # use state: absent to remove a user

        # define business databases : https://doc.pgsty.com/pgsql/db
        pg_databases:                       # define business databases on this cluster, array of database definition
          - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
            #state: create                  # optional, create|absent|recreate, create by default
            baseline: cmdb.sql              # optional, database sql baseline path, (relative path among the ansible search path, e.g.: files/)
            schemas: [ pigsty ]             # optional, additional schemas to be created, array of schema names
            extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
              - vector                      # install pgvector for vector similarity search
              - postgis                     # install postgis for geospatial type & index
              - timescaledb                 # install timescaledb for time-series data
              - { name: pg_wait_sampling, schema: monitor } # install pg_wait_sampling on monitor schema
            comment: pigsty meta database   # optional, comment string for this database
            #pgbouncer: true                # optional, add this database to the pgbouncer database list? true by default
            #owner: postgres                # optional, database owner, current user if not specified
            #template: template1            # optional, which template to use, template1 by default
            #strategy: FILE_COPY            # optional, clone strategy: FILE_COPY or WAL_LOG (PG15+), default to PG's default
            #encoding: UTF8                 # optional, inherited from template / cluster if not defined (UTF8)
            #locale: C                      # optional, inherited from template / cluster if not defined (C)
            #lc_collate: C                  # optional, inherited from template / cluster if not defined (C)
            #lc_ctype: C                    # optional, inherited from template / cluster if not defined (C)
            #locale_provider: libc          # optional, locale provider: libc, icu, builtin (PG15+)
            #icu_locale: en-US              # optional, icu locale for icu locale provider (PG15+)
            #icu_rules: ''                  # optional, icu rules for icu locale provider (PG16+)
            #builtin_locale: C.UTF-8        # optional, builtin locale for builtin locale provider (PG17+)
            #tablespace: pg_default         # optional, default tablespace, pg_default by default
            #is_template: false             # optional, mark database as template, allowing clone by any user with CREATEDB privilege
            #allowconn: true                # optional, allow connection, true by default. false will disable connect at all
            #revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
            #register_datasource: true      # optional, register this database to grafana datasources? true by default
            #connlimit: -1                  # optional, database connection limit, default -1 disable limit
            #pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
            #pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
            #pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
            #pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
            #pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
            #pool_max_db_conn: 100          # optional, max database connections at database level, default 100
          #- {name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }

        # define HBA rules : https://doc.pgsty.com/pgsql/hba
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }

        # define backup policies: https://doc.pgsty.com/pgsql/backup
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every day 1am

        # define (OPTIONAL) L2 VIP that bind to primary
        #pg_vip_enabled: true
        #pg_vip_address: 10.10.10.2/24
        #pg_vip_interface: eth1

    #----------------------------------------------#
    # PGSQL HA Cluster Example: 3-node pg-test
    #----------------------------------------------#
    #pg-test:
    #  hosts:
    #    10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
    #    10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
    #    10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
    #  vars:
    #    pg_cluster: pg-test           # define pgsql cluster name
    #    pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
    #    pg_databases: [{ name: test }]
    #    # define business service here: https://doc.pgsty.com/pgsql/service
    #    pg_services:                        # extra services in addition to pg_default_services, array of service definition
    #      # standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
    #      - name: standby                   # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
    #        port: 5435                      # required, service exposed port (work as kubernetes service node port mode)
    #        ip: "*"                         # optional, service bind ip address, `*` for all ip by default
    #        selector: "[]"                  # required, service member selector, use JMESPath to filter inventory
    #        dest: default                   # optional, destination port, default|postgres|pgbouncer|<port_number>, 'default' by default
    #        check: /sync                    # optional, health check url path, / by default
    #        backup: "[? pg_role == `primary`]"  # backup server selector
    #        maxconn: 3000                   # optional, max allowed front-end connection
    #        balance: roundrobin             # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
    #        options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'
    #    pg_vip_enabled: true
    #    pg_vip_address: 10.10.10.3/24
    #    pg_vip_interface: eth1
    #    node_crontab:  # make a full backup on monday 1am, and an incremental backup during weekdays
    #      - '00 01 * * 1 postgres /pg/bin/pg-backup full'
    #      - '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup'

    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        repo_enabled: true    # build local repo, and install everything from it:  https://doc.pgsty.com/admin/repo
        # and download all extensions into local repo
        repo_extra_packages: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # ETCD : https://doc.pgsty.com/etcd
    #----------------------------------------------#
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd
        etcd_safeguard: false             # prevent purging running etcd instance?

    #----------------------------------------------#
    # MINIO : https://doc.pgsty.com/minio
    #----------------------------------------------#
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 }
      vars:
        minio_cluster: minio
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    #----------------------------------------------#
    # DOCKER : https://doc.pgsty.com/docker
    # APP    : https://doc.pgsty.com/app
    #----------------------------------------------#
    # OPTIONAL, launch example pgadmin app with: ./app.yml & ./app.yml -e app=bytebase
    app:
      hosts: { 10.10.10.10: {} }
      vars:
        docker_enabled: true                # enable docker with ./docker.yml
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        app: pgadmin                        # specify the default app name to be installed (in the apps)
        apps:                               # define all applications, appname: definition

          # Admin GUI for PostgreSQL, launch with: ./app.yml
          pgadmin:                          # pgadmin app definition (app/pgadmin -> /opt/pgadmin)
            conf:                           # override /opt/pgadmin/.env
              PGADMIN_DEFAULT_EMAIL: [email protected]   # default user name
              PGADMIN_DEFAULT_PASSWORD: pigsty         # default password

          # Schema Migration GUI for PostgreSQL, launch with: ./app.yml -e app=bytebase
          bytebase:
            conf:
              BB_DOMAIN: http://ddl.pigsty  # replace it with your public domain name and postgres database url
              BB_PGURL: "postgresql://dbuser_bytebase:[email protected]:5432/bytebase?sslmode=prefer"

    #----------------------------------------------#
    # REDIS : https://doc.pgsty.com/redis
    #----------------------------------------------#
    # OPTIONAL, launch redis clusters with: ./redis.yml
    redis-ms:
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }



  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]

    certbot_sign: false               # enable certbot to sign https certificate for infra portal
    certbot_email: [email protected]     # replace your email address to receive expiration notice
    infra_portal:                     # infra services exposed via portal
      home      : { domain: i.pigsty }     # default domain name
      pgadmin   : { domain: adm.pigsty ,endpoint: "${admin_ip}:8885" }
      bytebase  : { domain: ddl.pigsty ,endpoint: "${admin_ip}:8887" }
      minio     : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

      #website:   # static local website example stub
      #  domain: repo.pigsty              # external domain name for static site
      #  certbot: repo.pigsty             # use certbot to sign https certificate for this static site
      #  path: /www/pigsty                # path to the static site directory

      #supabase:  # dynamic upstream service example stub
      #  domain: supa.pigsty          # external domain name for upstream service
      #  certbot: supa.pigsty         # certbot cert name, apply with `make cert`
      #  endpoint: "10.10.10.10:8000" # upstream server endpoint address
      #  websocket: true              # add websocket support

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname on single node mode
    node_tune: oltp                       # node tuning specs: oltp,olap,tiny,crit
    node_etc_hosts:                       # add static domains to all nodes /etc/hosts
      - '${admin_ip} i.pigsty sss.pigsty'
      - '${admin_ip} adm.pigsty ddl.pigsty repo.pigsty supa.pigsty'
    node_repo_modules: local              # use pre-made local repo rather than install from upstream
    node_repo_remove: true                # remove existing node repo for node managed by pigsty
    #node_packages: [openssh-server]      # packages to be installed current nodes with latest version
    #node_timezone: Asia/Hong_Kong        # overwrite node timezone

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # default postgres version
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_safeguard: false                 # prevent purging running postgres instance?
    pg_packages: [ pgsql-main, pgsql-common ]                 # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # BACKUP : https://doc.pgsty.com/pgsql/backup
    #----------------------------------------------#
    # rich uses minio as the backup repo instead of the 'local' fs, see `pgbackrest_repo` below
    # you can also use external object storage as backup repo
    pgbackrest_method: minio          # use minio as the default backup repo in this template
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest [CHANGE ACCORDING to minio_users.pgbackrest]
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest [CHANGE ACCORDING to minio_users.pgbackrest]
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for the last 14 days
      s3:                             # you can use cloud object storage as backup repo
        type: s3                      # Add your object storage credentials here!
        s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
        s3_region: oss-cn-beijing
        s3_bucket: <your_bucket_name>
        s3_key: <your_access_key>
        s3_key_secret: <your_secret_key>
        s3_uri_style: host
        path: /pgbackrest
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for the last 14 days
...

Configuration Explained

The rich template is Pigsty's full-feature showcase configuration, for users who want to explore every capability in depth.

Use Cases

  • Offline environments that need a local software repo
  • Using MinIO as PostgreSQL backup storage
  • Pre-provisioning multiple business databases and users
  • Running Docker applications (pgAdmin, Bytebase, etc.)
  • Learners who want to see the full usage of configuration parameters

Main Differences from meta

  • Local software repo build enabled (repo_enabled: true)
  • MinIO backup storage enabled (pgbackrest_method: minio)
  • Extra extensions pre-installed (TimescaleDB, pg_wait_sampling, etc.)
  • Detailed parameter comments to explain what each setting means
  • HA cluster configuration stub included (pg-test)
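
Since rich stores backups in MinIO, the backup path can be exercised right after deployment; the pg-backup script below is the same one invoked by node_crontab in this template:

su - postgres -c '/pg/bin/pg-backup full'   # take a full backup into the minio repo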

Caveats

  • Some extensions are unavailable on the ARM64 architecture; adjust as needed
  • Building the local software repo takes considerable time and disk space
  • The default passwords are samples; be sure to change them in production
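
For the build-time caveat above, the download scope can be trimmed via repo_extra_packages in the infra group; a sketch keeping only a few extension categories (category names taken from rich.yml):

repo_extra_packages: [ pg18-main ,pg18-time ,pg18-gis ]   # fewer categories, faster repo build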

4 - slim

A slim installation template: no monitoring infrastructure is deployed; PostgreSQL is installed directly from the internet.

The slim template provides a minimal installation: it deploys a highly available PostgreSQL cluster directly from the internet, without the INFRA monitoring stack.

If all you need is a working database instance, without a monitoring system, consider the slim installation mode.


Configuration Overview

  • Name: slim
  • Node Count: 1 node
  • Description: slim installation template; deploys PostgreSQL directly without the monitoring infrastructure
  • OS: el8, el9, el10, d12, d13, u22, u24
  • Arch: x86_64, aarch64
  • Related: meta

To enable:

./configure -c slim [-i <primary_ip>]
./slim.yml   # run the slim installation

Configuration Content

Source file: pigsty/conf/slim.yml

---
#==============================================================#
# File      :   slim.yml
# Desc      :   Pigsty slim installation config template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for slim / minimal installation
# No monitoring & infra will be installed, just raw postgresql
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c slim
#   ./slim.yml

all:
  children:

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
        #10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
        #10.10.10.12: { etcd_seq: 3 }  # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd  # mark etcd cluster name etcd

    #----------------------------------------------#
    # PostgreSQL Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        #10.10.10.11: { pg_seq: 2, pg_role: replica } # you can add more!
        #10.10.10.12: { pg_seq: 3, pg_role: replica, pg_offline_query: true }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
        pg_databases:
          - { name: meta, baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [ vector ]}
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every day at 1am

  vars:
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_version: 18                      # Default PostgreSQL Major Version is 18
    pg_packages: [ pgsql-main, pgsql-common ]   # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Explained

The slim template is Pigsty's minimal installation configuration, designed for quickly deploying a bare PostgreSQL cluster.

Use Cases

  • You only need a PostgreSQL database, with no monitoring system
  • Small servers or edge devices with limited resources
  • Quickly deploying a throwaway database for testing
  • You already have a monitoring system and only need a PostgreSQL HA cluster

Key Features

  • Uses the slim.yml playbook instead of deploy.yml for installation
  • Installs software directly from the internet; no local software repo is built
  • Retains the core PostgreSQL HA stack (Patroni + etcd + HAProxy)
  • Minimizes package downloads for faster installation
  • Uses PostgreSQL 18 by default

Differences from meta

  • slim uses the dedicated slim.yml playbook and skips the INFRA module
  • Faster installation and a smaller resource footprint
  • Suited to "just give me a database" scenarios

Caveats

  • After a slim install there is no Grafana to inspect database status
  • If you need monitoring, use the meta or rich template
  • Replicas can be added as needed for high availability
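
Since a slim install ships no Grafana, basic health is checked from the shell instead; a sketch using standard component endpoints (assuming Patroni's default REST port 8008):

systemctl status patroni etcd                   # HA agent and DCS status
curl -s http://10.10.10.10:8008/health          # patroni health endpoint, 200 when running
psql postgres://dbuser_meta:[email protected]:5432/meta -c 'SELECT 1'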

5 - fat

A feature-all-test template: installs all extensions on a single node and builds a local software repo covering all PG 13-18 versions.

The fat template is Pigsty's Feature-All-Test configuration: it installs every extension plugin on a single node and builds a local software repo containing all extensions for all six major versions, PostgreSQL 13 through 18.

This is a full-featured configuration for testing and development, suited to scenarios that need a complete package cache or want to exercise every extension.


Configuration Overview

  • Name: fat
  • Node Count: 1 node
  • Description: feature-all-test template; installs all extensions and builds a local software repo covering all PG 13-18 versions
  • OS: el8, el9, el10, d12, d13, u22, u24
  • Arch: x86_64, aarch64
  • Related: meta, slim, fat

To enable:

./configure -c fat [-i <primary_ip>]

To pin a specific PostgreSQL major version:

./configure -c fat -v 17   # use PostgreSQL 17
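
After ./deploy.yml finishes building the local repo, its contents can be spot-checked; a sketch assuming the default local repo directory /www/pigsty (the same path the website stub in other templates points at):

ls /www/pigsty | head   # spot-check packages in the built local software repo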

Configuration Content

Source file: pigsty/conf/fat.yml

---
#==============================================================#
# File      :   fat.yml
# Desc      :   Pigsty Feature-All-Test config template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the 1-node feature-all-test config for pigsty
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c fat [-v 18|17|16|15]
#   ./deploy.yml

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql
    #----------------------------------------------#
    # this is an example single-node postgres cluster with pgvector installed, with one biz database & two biz users
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary } # <---- primary instance with read-write capability
        #x.xx.xx.xx: { pg_seq: 2, pg_role: replica } # <---- read only replica for read-only online traffic
        #x.xx.xx.xy: { pg_seq: 3, pg_role: offline } # <---- offline instance of ETL & interactive queries
      vars:
        pg_cluster: pg-meta

        # install, load, create pg extensions: https://doc.pgsty.com/pgsql/extension
        pg_extensions: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
        pg_libs: 'timescaledb, pg_stat_statements, auto_explain, pg_wait_sampling'

        # define business users/roles : https://doc.pgsty.com/pgsql/user
        pg_users:
          - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
            password: DBUser.Meta           # optional, the password. can be a scram-sha-256 hash string or plain text
            #state: create                   # optional, create|absent, 'create' by default, use 'absent' to drop user
            #login: true                     # optional, can log in, true by default (new biz ROLE should be false)
            #superuser: false                # optional, is superuser? false by default
            #createdb: false                 # optional, can create databases? false by default
            #createrole: false               # optional, can create role? false by default
            #inherit: true                   # optional, can this role use inherited privileges? true by default
            #replication: false              # optional, can this role do replication? false by default
            #bypassrls: false                # optional, can this role bypass row level security? false by default
            #pgbouncer: true                 # optional, add this user to the pgbouncer user-list? false by default (production user should be true explicitly)
            #connlimit: -1                   # optional, user connection limit, default -1 disable limit
            #expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
            #expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired (OVERWRITTEN by expire_in)
            #comment: pigsty admin user      # optional, comment string for this user/role
            #roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin|readonly|readwrite|offline}
            #parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
            #pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
            #pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
            # Enhanced roles syntax (PG16+): roles can be string or object with options:
            #   - dbrole_readwrite                       # simple string: GRANT role
            #   - { name: role, admin: true }            # GRANT WITH ADMIN OPTION
            #   - { name: role, set: false }             # PG16: REVOKE SET OPTION
            #   - { name: role, inherit: false }         # PG16: REVOKE INHERIT OPTION
            #   - { name: role, state: absent }          # REVOKE membership
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database }
          #- {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for bytebase database   }
          #- {name: dbuser_remove ,state: absent }       # use state: absent to remove a user

        # define business databases : https://doc.pgsty.com/pgsql/db
        pg_databases:                       # define business databases on this cluster, array of database definition
          - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
            #state: create                  # optional, create|absent|recreate, create by default
            baseline: cmdb.sql              # optional, database sql baseline path, (relative path among the ansible search path, e.g.: files/)
            schemas: [ pigsty ]             # optional, additional schemas to be created, array of schema names
            extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
              - vector                      # install pgvector for vector similarity search
              - postgis                     # install postgis for geospatial type & index
              - timescaledb                 # install timescaledb for time-series data
              - { name: pg_wait_sampling, schema: monitor } # install pg_wait_sampling on monitor schema
            comment: pigsty meta database   # optional, comment string for this database
            #pgbouncer: true                # optional, add this database to the pgbouncer database list? true by default
            #owner: postgres                # optional, database owner, current user if not specified
            #template: template1            # optional, which template to use, template1 by default
            #strategy: FILE_COPY            # optional, clone strategy: FILE_COPY or WAL_LOG (PG15+), default to PG's default
            #encoding: UTF8                 # optional, inherited from template / cluster if not defined (UTF8)
            #locale: C                      # optional, inherited from template / cluster if not defined (C)
            #lc_collate: C                  # optional, inherited from template / cluster if not defined (C)
            #lc_ctype: C                    # optional, inherited from template / cluster if not defined (C)
            #locale_provider: libc          # optional, locale provider: libc, icu, builtin (PG15+)
            #icu_locale: en-US              # optional, icu locale for icu locale provider (PG15+)
            #icu_rules: ''                  # optional, icu rules for icu locale provider (PG16+)
            #builtin_locale: C.UTF-8        # optional, builtin locale for builtin locale provider (PG17+)
            #tablespace: pg_default         # optional, default tablespace, pg_default by default
            #is_template: false             # optional, mark database as template, allowing clone by any user with CREATEDB privilege
            #allowconn: true                # optional, allow connection, true by default. false will disable connect at all
            #revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
            #register_datasource: true      # optional, register this database to grafana datasources? true by default
            #connlimit: -1                  # optional, database connection limit, default -1 disable limit
            #pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
            #pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
            #pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
            #pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
            #pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
            #pool_max_db_conn: 100          # optional, max database connections at database level, default 100
          #- {name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }

        # define HBA rules : https://doc.pgsty.com/pgsql/hba
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }

        # define backup policies: https://doc.pgsty.com/pgsql/backup
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every day 1am

        # define (OPTIONAL) L2 VIP that bind to primary
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1


    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        repo_enabled: true # build local repo:  https://doc.pgsty.com/admin/repo
        #repo_extra_packages: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
        repo_packages: [
          node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules,
          pg18-full,pg18-time,pg18-gis,pg18-rag,pg18-fts,pg18-olap,pg18-feat,pg18-lang,pg18-type,pg18-util,pg18-func,pg18-admin,pg18-stat,pg18-sec,pg18-fdw,pg18-sim,pg18-etl,
          pg17-full,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-util,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
          pg16-full,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-util,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
          pg15-full,pg15-time,pg15-gis,pg15-rag,pg15-fts,pg15-olap,pg15-feat,pg15-lang,pg15-type,pg15-util,pg15-func,pg15-admin,pg15-stat,pg15-sec,pg15-fdw,pg15-sim,pg15-etl,
          pg14-full,pg14-time,pg14-gis,pg14-rag,pg14-fts,pg14-olap,pg14-feat,pg14-lang,pg14-type,pg14-util,pg14-func,pg14-admin,pg14-stat,pg14-sec,pg14-fdw,pg14-sim,pg14-etl,
          pg13-full,pg13-time,pg13-gis,pg13-rag,pg13-fts,pg13-olap,pg13-feat,pg13-lang,pg13-type,pg13-util,pg13-func,pg13-admin,pg13-stat,pg13-sec,pg13-fdw,pg13-sim,pg13-etl,
          infra-extra, kafka, java-runtime, sealos, tigerbeetle, polardb, ivorysql
        ]

    #----------------------------------------------#
    # ETCD : https://doc.pgsty.com/etcd
    #----------------------------------------------#
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd
        etcd_safeguard: false             # prevent purging running etcd instance?

    #----------------------------------------------#
    # MINIO : https://doc.pgsty.com/minio
    #----------------------------------------------#
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 }
      vars:
        minio_cluster: minio
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    #----------------------------------------------#
    # DOCKER : https://doc.pgsty.com/docker
    # APP    : https://doc.pgsty.com/app
    #----------------------------------------------#
    # OPTIONAL, launch example pgadmin app with: ./app.yml & ./app.yml -e app=bytebase
    app:
      hosts: { 10.10.10.10: {} }
      vars:
        docker_enabled: true                # enable docker with ./docker.yml
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        app: pgadmin                        # specify the default app name to be installed (in the apps)
        apps:                               # define all applications, appname: definition

          # Admin GUI for PostgreSQL, launch with: ./app.yml
          pgadmin:                          # pgadmin app definition (app/pgadmin -> /opt/pgadmin)
            conf:                           # override /opt/pgadmin/.env
              PGADMIN_DEFAULT_EMAIL: [email protected]   # default user name
              PGADMIN_DEFAULT_PASSWORD: pigsty         # default password

          # Schema Migration GUI for PostgreSQL, launch with: ./app.yml -e app=bytebase
          bytebase:
            conf:
              BB_DOMAIN: http://ddl.pigsty  # replace it with your public domain name and postgres database url
              BB_PGURL: "postgresql://dbuser_bytebase:[email protected]:5432/bytebase?sslmode=prefer"


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]

    certbot_sign: false               # enable certbot to sign https certificate for infra portal
    certbot_email: [email protected]     # replace your email address to receive expiration notice
    infra_portal:                     # domain names and upstream servers
      home         : { domain: i.pigsty }
      pgadmin      : { domain: adm.pigsty ,endpoint: "${admin_ip}:8885" }
      bytebase     : { domain: ddl.pigsty ,endpoint: "${admin_ip}:8887" ,websocket: true}
      minio        : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

      #website:   # static local website example stub
      #  domain: repo.pigsty              # external domain name for static site
      #  certbot: repo.pigsty             # use certbot to sign https certificate for this static site
      #  path: /www/pigsty                # path to the static site directory

      #supabase:  # dynamic upstream service example stub
      #  domain: supa.pigsty          # external domain name for upstream service
      #  certbot: supa.pigsty         # certbot cert name for this upstream, apply with `make cert`
      #  endpoint: "10.10.10.10:8000" # upstream server endpoint (ip:port)
      #  websocket: true              # add websocket support

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: true              # overwrite node hostname on multi-node template
    node_tune: oltp                       # node tuning specs: oltp,olap,tiny,crit
    node_etc_hosts:                       # add static domains to all nodes /etc/hosts
      - 10.10.10.10 i.pigsty sss.pigsty
      - 10.10.10.10 adm.pigsty ddl.pigsty repo.pigsty supa.pigsty
    node_repo_modules: local,node,infra,pgsql # use pre-made local repo rather than install from upstream
    node_repo_remove: true                # remove existing node repo for node managed by pigsty
    #node_packages: [openssh-server]      # packages to install on current nodes, using the latest version
    #node_timezone: Asia/Hong_Kong        # overwrite node timezone

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # default postgres version
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_safeguard: false                 # prevent purging running postgres instance?
    pg_packages: [ pgsql-main, pgsql-common ] # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # BACKUP : https://doc.pgsty.com/pgsql/backup
    #----------------------------------------------#
    # use minio as the backup repo instead of the default 'local' fs repo; see `pgbackrest_repo` below
    # you can also use external (cloud) object storage as the backup repo
    pgbackrest_method: minio          # pgbackrest repo method: which entry of `pgbackrest_repo` to use
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, ignored by minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest [CHANGE ACCORDING to minio_users.pgbackrest]
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest [CHANGE ACCORDING to minio_users.pgbackrest]
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for the last 14 days
      s3:                             # you can use cloud object storage as backup repo
        type: s3                      # Add your object storage credentials here!
        s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
        s3_region: oss-cn-beijing
        s3_bucket: <your_bucket_name>
        s3_key: <your_access_key>
        s3_key_secret: <your_secret_key>
        s3_uri_style: host
        path: /pgbackrest
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on the s3 repo
        retention_full: 14            # keep full backup for the last 14 days
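    # note: with a repo configured, a full backup can be taken manually with `/pg/bin/pg-backup full`
    # (run as the postgres user); this is the same wrapper used by node_crontab entries in other templates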

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Breakdown

The fat template is Pigsty's full-featured test configuration, designed for completeness testing and offline package builds.

Key Features

  • All extensions installed: every categorized extension package for PostgreSQL 18
  • Multi-version repo: the local software repo holds all six major versions, PostgreSQL 13-18
  • Full component stack: includes MinIO backup, Docker applications, VIP, and more
  • Enterprise components: includes Kafka, PolarDB, IvorySQL, TigerBeetle, etc.

Repo Contents

  • Kernels and extensions: PostgreSQL 13-18, all six major versions, with all extensions
  • Extension category packages: time, gis, rag, fts, olap, feat, lang, type, util, func, admin, stat, sec, fdw, sim, etl
  • Enterprise components: Kafka, Java runtime, Sealos, TigerBeetle
  • Database kernels: PolarDB, IvorySQL

Differences from rich

  • fat bundles all six PostgreSQL versions (13-18), while rich only bundles the current default version
  • fat adds extra enterprise components (Kafka, PolarDB, IvorySQL, etc.)
  • fat needs more disk space and a longer build time

Use Cases

  • Pigsty development, testing, and feature validation
  • Building complete multi-version offline packages
  • Compatibility testing across all extensions
  • Pre-caching every package for enterprise environments

Caveats

  • Requires substantial disk space (100GB+ recommended) to store all packages
  • Building the local software repo takes considerable time
  • Some extensions are unavailable on the ARM64 architecture
  • The default passwords are samples; be sure to change them in production
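
To enable this template, the same configure workflow used throughout this document applies:

./configure -c fat [-i <primary_ip>]
./deploy.yml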

6 - infra

A dedicated template that installs only the observability infrastructure, without PostgreSQL or etcd

The infra template deploys only Pigsty's observability infrastructure components (VictoriaMetrics / VictoriaLogs / Grafana / Nginx, etc.), without PostgreSQL or etcd.

It suits scenarios that need a standalone monitoring stack, such as monitoring external PostgreSQL / RDS instances or other data sources.


Configuration Overview

  • Config name: infra
  • Node count: single node or multiple nodes
  • Description: observability infrastructure only, without PostgreSQL or etcd
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported arch: x86_64, aarch64
  • Related configs: meta

Usage:

./configure -c infra [-i <primary_ip>]
./infra.yml    # run the infra playbook only

Configuration Content

Source file: pigsty/conf/infra.yml

---
#==============================================================#
# File      :   infra.yml
# Desc      :   Infra Only Config
# Ctime     :   2025-12-16
# Mtime     :   2025-12-30
# Docs      :   https://doc.pgsty.com/infra
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for deploy victoria stack alone
# tutorial: https://doc.pgsty.com/infra
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c infra
#   ./infra.yml

all:
  children:
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
        #10.10.10.11: { infra_seq: 2 } # you can add more nodes if you want
        #10.10.10.12: { infra_seq: 3 } # don't forget to assign unique infra_seq for each node
      vars:
        docker_enabled: true            # enable docker with ./docker.yml
        docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        pg_exporters:     # bin/pgmon-add pg-rds
          20001: { pg_cluster: pg-rds ,pg_seq: 1 ,pg_host: 10.10.10.10 ,pg_exporter_url: 'postgres://postgres:[email protected]:5432/postgres' }

  vars:                                 # global variables
    version: v4.0.0                     # pigsty version string
    admin_ip: 10.10.10.10               # admin node ip address
    region: default                     # upstream mirror region: default,china,europe
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit
    infra_portal:                       # infra services exposed via portal
      home : { domain: i.pigsty }       # default domain name
    repo_enabled: false                 # online installation without repo
    node_repo_modules: node,infra,pgsql # add these repos directly
    #haproxy_enabled: false              # enable haproxy on infra node?
    #vector_enabled: false               # enable vector on infra node?

    # DON'T FORGET TO CHANGE DEFAULT PASSWORDS!
    grafana_admin_password: pigsty
...

Configuration Breakdown

The infra template is Pigsty's monitoring-only configuration, designed for deploying the observability stack on its own.

Use Cases

  • Monitoring external PostgreSQL instances (RDS, self-hosted, etc.)
  • Running a standalone monitoring / alerting platform
  • Adding monitoring to existing PostgreSQL clusters
  • Serving as a central console for multi-cluster monitoring

Included Components

  • VictoriaMetrics: time-series database storing monitoring metrics
  • VictoriaLogs: log aggregation system
  • VictoriaTraces: distributed tracing system
  • Grafana: visualization dashboards
  • Alertmanager: alert management
  • Nginx: reverse proxy and web entry point

Excluded Components

  • PostgreSQL database clusters
  • etcd distributed coordination service
  • MinIO object storage

Monitoring external instances: once configured, external PostgreSQL instances can be added to monitoring with the pgsql-monitor.yml playbook:

pg_exporters:
  20001: { pg_cluster: pg-foo, pg_seq: 1, pg_host: 10.10.10.100 }
  20002: { pg_cluster: pg-bar, pg_seq: 1, pg_host: 10.10.10.101 }
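
After declaring the exporters, apply them with the bin/pgmon-add wrapper referenced in the config comments above, or invoke the playbook directly; the clsname variable below is a sketch of the playbook parameter, not a verified signature:

bin/pgmon-add pg-foo                      # wrapper script shipped with pigsty
./pgsql-monitor.yml -e clsname=pg-foo     # direct playbook invocation (assumed parameter)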

Caveats

  • This template does not install any database
  • For full functionality, use the meta or rich templates instead
  • Add more infra nodes as needed for high availability

7 - Kernel Templates

8 - pgsql

Vanilla PostgreSQL kernel, supporting multi-version deployment of PostgreSQL 13 through 18

The pgsql template uses the vanilla PostgreSQL kernel, Pigsty's default database kernel, and supports PostgreSQL versions 13 through 18.


Configuration Overview

  • Config name: pgsql
  • Node count: single node
  • Description: vanilla PostgreSQL kernel template
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported arch: x86_64, aarch64
  • Related configs: meta

Usage:

./configure -c pgsql [-i <primary_ip>]

To pin a specific PostgreSQL version (e.g. 17):

./configure -c pgsql -v 17

Configuration Content

Source file: pigsty/conf/pgsql.yml

---
#==============================================================#
# File      :   pgsql.yml
# Desc      :   1-node PostgreSQL Config template
# Ctime     :   2025-02-23
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for basical PostgreSQL Kernel.
# Nothing special, just a basic setup with one node.
# tutorial: https://doc.pgsty.com/pgsql/kernel/postgres
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c pgsql
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # PostgreSQL Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
        pg_databases:
          - { name: meta, baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [ postgis, timescaledb, vector ]}
        pg_extensions: [ postgis, timescaledb, pgvector, pg_wait_sampling ]
        pg_libs: 'timescaledb, pg_stat_statements, auto_explain, pg_wait_sampling'
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

  vars:
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # Default PostgreSQL Major Version is 18
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_packages: [ pgsql-main, pgsql-common ]   # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Breakdown

The pgsql template is Pigsty's standard kernel configuration, using community vanilla PostgreSQL.

Supported Versions

  • PostgreSQL 18 (default)
  • PostgreSQL 17, 16, 15, 14, 13

Use Cases

  • Using the latest PostgreSQL features
  • Needing the broadest extension support
  • Standard production deployments
  • Same functionality as the meta template, with the vanilla kernel declared explicitly

Differences from meta

  • The pgsql template explicitly declares the vanilla PostgreSQL kernel
  • Suitable when different kernel types must be clearly distinguished
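
Once deployed, the business users defined in this template can connect right away; for example, with the dbuser_meta credentials from the cluster definition above:

psql postgres://dbuser_meta:[email protected]:5432/meta -c 'SELECT version();'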

9 - citus

Citus distributed PostgreSQL cluster, providing horizontal scaling and sharding

The citus template deploys a distributed PostgreSQL cluster with the Citus extension, providing transparent horizontal scaling and data sharding.


Configuration Overview

  • Config name: citus
  • Node count: five nodes (1 coordinator + 4 data nodes)
  • Description: Citus distributed PostgreSQL cluster
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported arch: x86_64
  • Related configs: meta

Usage:

./configure -c citus [-i <primary_ip>]

Configuration Content

Source file: pigsty/conf/citus.yml

---
#==============================================================#
# File      :   citus.yml
# Desc      :   1-node Citus (Distributive) Config Template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for Citus Distributive Cluster
# tutorial: https://doc.pgsty.com/pgsql/kernel/citus
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c citus
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # pg-citus: 10 node citus cluster
    #----------------------------------------------#
    pg-citus: # the citus group contains 5 clusters
      hosts:
        10.10.10.10: { pg_group: 0, pg_cluster: pg-citus0 ,pg_vip_address: 10.10.10.60/24 ,pg_seq: 0, pg_role: primary }
        #10.10.10.11: { pg_group: 0, pg_cluster: pg-citus0 ,pg_vip_address: 10.10.10.60/24 ,pg_seq: 1, pg_role: replica }
        #10.10.10.12: { pg_group: 1, pg_cluster: pg-citus1 ,pg_vip_address: 10.10.10.61/24 ,pg_seq: 0, pg_role: primary }
        #10.10.10.13: { pg_group: 1, pg_cluster: pg-citus1 ,pg_vip_address: 10.10.10.61/24 ,pg_seq: 1, pg_role: replica }
        #10.10.10.14: { pg_group: 2, pg_cluster: pg-citus2 ,pg_vip_address: 10.10.10.62/24 ,pg_seq: 0, pg_role: primary }
        #10.10.10.15: { pg_group: 2, pg_cluster: pg-citus2 ,pg_vip_address: 10.10.10.62/24 ,pg_seq: 1, pg_role: replica }
        #10.10.10.16: { pg_group: 3, pg_cluster: pg-citus3 ,pg_vip_address: 10.10.10.63/24 ,pg_seq: 0, pg_role: primary }
        #10.10.10.17: { pg_group: 3, pg_cluster: pg-citus3 ,pg_vip_address: 10.10.10.63/24 ,pg_seq: 1, pg_role: replica }
        #10.10.10.18: { pg_group: 4, pg_cluster: pg-citus4 ,pg_vip_address: 10.10.10.64/24 ,pg_seq: 0, pg_role: primary }
        #10.10.10.19: { pg_group: 4, pg_cluster: pg-citus4 ,pg_vip_address: 10.10.10.64/24 ,pg_seq: 1, pg_role: replica }
      vars:
        pg_mode: citus                            # pgsql cluster mode: citus
        pg_shard: pg-citus                        # citus shard name: pg-citus
        pg_primary_db: citus                      # primary database used by citus
        pg_dbsu_password: DBUser.Postgres         # enable dbsu password access for citus
        pg_extensions: [ citus, postgis, pgvector, topn, pg_cron, hll ]  # install these extensions
        pg_libs: 'citus, pg_cron, pg_stat_statements' # citus will be added by patroni automatically
        pg_users: [{ name: dbuser_citus ,password: DBUser.Citus ,pgbouncer: true ,roles: [ dbrole_admin ]    }]
        pg_databases: [{ name: citus ,owner: dbuser_citus ,extensions: [ citus, vector, topn, pg_cron, hll ] }]
        pg_parameters:
          cron.database_name: citus
          citus.node_conninfo: 'sslrootcert=/pg/cert/ca.crt sslmode=verify-full'
        pg_hba_rules:
          - { user: 'all' ,db: all  ,addr: 127.0.0.1/32  ,auth: ssl ,title: 'all user ssl access from localhost' }
          - { user: 'all' ,db: all  ,addr: intra         ,auth: ssl ,title: 'all user ssl access from intranet'  }
        pg_vip_enabled: true                      # enable vip for citus cluster
        pg_vip_interface: eth1                    # vip interface for all members (you can override this in each host)

  vars:                               # global variables
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: true            # overwrite hostname since this is a multi-node template
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 17                      # Default PostgreSQL Major Version is 17
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_packages: [ pgsql-main, pgsql-common ]   # pg kernel and common utils
    #pg_extensions: [ pg17-time ,pg17-gis ,pg17-rag ,pg17-fts ,pg17-olap ,pg17-feat ,pg17-lang ,pg17-type ,pg17-util ,pg17-func ,pg17-admin ,pg17-stat ,pg17-sec ,pg17-fdw ,pg17-sim ,pg17-etl]
    #repo_extra_packages: [ pgsql-main, citus, postgis, pgvector, pg_cron, hll, topn ]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Breakdown

The citus template deploys a Citus distributed PostgreSQL cluster, suited to large-scale data that needs horizontal scaling.

Key Features

  • Transparent sharding, automatically distributing data across nodes
  • Parallel query execution, aggregating results from multiple nodes
  • Distributed transactions (2PC)
  • Preserves PostgreSQL SQL compatibility

Architecture

  • Coordinator (pg-citus0): receives queries and routes them to the data nodes
  • Data nodes (pg-citus1~4): store the sharded data

Use Cases

  • Single tables outgrowing one machine's capacity
  • Horizontally scaling write and query throughput
  • Multi-tenant SaaS applications
  • Real-time analytical workloads

Caveats

  • Citus supports PostgreSQL 14~17
  • Distributed tables require a distribution column (see the sketch below)
  • Some PostgreSQL features are limited (e.g. cross-shard foreign keys)
  • ARM64 architecture is not supported
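
A minimal sketch of declaring a distributed table on the coordinator, using the citus database defined above; the events table and its columns are hypothetical, while create_distributed_table is the standard Citus API:

-- run on the coordinator (pg-citus0), in the citus database
CREATE TABLE events (tenant_id bigint, id bigint, payload jsonb, PRIMARY KEY (tenant_id, id));
SELECT create_distributed_table('events', 'tenant_id');  -- shard by tenant_id across worker groups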

10 - mssql

WiltonDB / Babelfish kernel, providing Microsoft SQL Server wire-protocol and syntax compatibility

The mssql template replaces vanilla PostgreSQL with the WiltonDB / Babelfish kernel, providing Microsoft SQL Server wire protocol (TDS) and T-SQL syntax compatibility.

For the full tutorial, see: Babelfish (MSSQL) kernel notes


Configuration Overview

  • Config name: mssql
  • Node count: single node
  • Description: WiltonDB / Babelfish template with SQL Server protocol compatibility
  • Supported OS: el8, el9, el10, u22, u24 (not available on Debian)
  • Supported arch: x86_64
  • Related configs: meta

Usage:

./configure -c mssql [-i <primary_ip>]

Configuration Content

Source file: pigsty/conf/mssql.yml

---
#==============================================================#
# File      :   mssql.yml
# Desc      :   Babelfish: WiltonDB (MSSQL Compatible) template
# Ctime     :   2020-08-01
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for Babelfish Kernel (WiltonDB),
# Which is a PostgreSQL 15 fork with SQL Server Compatibility
# tutorial: https://doc.pgsty.com/pgsql/kernel/babelfish
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c mssql
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # Babelfish Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_mssql ,password: DBUser.MSSQL ,superuser: true, pgbouncer: true ,roles: [dbrole_admin], comment: superuser & owner for babelfish  }
        pg_databases:
          - name: mssql
            baseline: mssql.sql
            extensions: [uuid-ossp, babelfishpg_common, babelfishpg_tsql, babelfishpg_tds, babelfishpg_money, pg_hint_plan, system_stats, tds_fdw]
            owner: dbuser_mssql
            parameters: { 'babelfishpg_tsql.migration_mode' : 'multi-db' }
            comment: babelfish cluster, a MSSQL compatible pg cluster
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

        # Babelfish / WiltonDB Ad Hoc Settings
        pg_mode: mssql                     # Microsoft SQL Server Compatible Mode
        pg_version: 15
        pg_packages: [ wiltondb, pgsql-common, sqlcmd ]
        pg_libs: 'babelfishpg_tds, pg_stat_statements, auto_explain' # add timescaledb to shared_preload_libraries
        pg_default_hba_rules: # overwrite default HBA rules for babelfish cluster, order by `order`
          - { user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident' ,order: 100}
          - { user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident' ,order: 150}
          - { user: '${repl}'    ,db: replication ,addr: localhost ,auth: pwd   ,title: 'replicator replication from localhost' ,order: 200}
          - { user: '${repl}'    ,db: replication ,addr: intra     ,auth: pwd   ,title: 'replicator replication from intranet' ,order: 250}
          - { user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: pwd   ,title: 'replicator postgres db from intranet' ,order: 300}
          - { user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password' ,order: 350}
          - { user: '${monitor}' ,db: all         ,addr: infra     ,auth: pwd   ,title: 'monitor from infra host with password' ,order: 400}
          - { user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl' ,order: 450}
          - { user: '${admin}'   ,db: all         ,addr: world     ,auth: ssl   ,title: 'admin @ everywhere with ssl & pwd' ,order: 500}
          - { user: dbuser_mssql ,db: mssql       ,addr: intra     ,auth: md5   ,title: 'allow mssql dbsu intranet access' ,order: 525} # <--- use md5 auth method for mssql user
          - { user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: pwd   ,title: 'pgbouncer read/write via local socket' ,order: 550}
          - { user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: pwd   ,title: 'read/write biz user via password' ,order: 600}
          - { user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: pwd   ,title: 'allow etl offline tasks from intranet' ,order: 650}
        pg_default_services: # route primary & replica service to mssql port 1433
          - { name: primary ,port: 5433 ,dest: 1433  ,check: /primary   ,selector: "[]" }
          - { name: replica ,port: 5434 ,dest: 1433  ,check: /read-only ,selector: "[]" , backup: "[? pg_role == `primary` || pg_role == `offline` ]" }
          - { name: default ,port: 5436 ,dest: postgres ,check: /primary   ,selector: "[]" }
          - { name: offline ,port: 5438 ,dest: postgres ,check: /replica   ,selector: "[? pg_role == `offline` || pg_offline_query ]" , backup: "[? pg_role == `replica` && !pg_offline_query]" }

  vars:
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false                 # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql,mssql # extra mssql repo is required
    node_tune: oltp                           # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 15                            # Babelfish kernel is compatible with postgres 15
    pg_conf: oltp.yml                         # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Breakdown

The mssql template lets you connect to PostgreSQL with SQL Server Management Studio (SSMS) or other SQL Server client tools.

Key Features

  • Speaks the TDS protocol (port 1433), compatible with SQL Server clients
  • Supports T-SQL syntax, keeping migration cost low
  • Preserves PostgreSQL ACID semantics and the extension ecosystem
  • Supports both the multi-db and single-db migration modes

Connecting

# with the sqlcmd command-line tool
sqlcmd -S 10.10.10.10,1433 -U dbuser_mssql -P DBUser.MSSQL -d mssql

# with SSMS or Azure Data Studio
# Server: 10.10.10.10,1433
# Authentication: SQL Server Authentication
# Login: dbuser_mssql
# Password: DBUser.MSSQL
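
Once connected over TDS, ordinary T-SQL statements work; a minimal sketch (the table and values here are hypothetical):

-- T-SQL executed through the TDS port
CREATE TABLE dbo.orders (id INT IDENTITY PRIMARY KEY, amount MONEY);
INSERT INTO dbo.orders (amount) VALUES (9.99);
SELECT TOP 1 * FROM dbo.orders;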

Use Cases

  • Migrating from SQL Server to PostgreSQL
  • Applications that must serve both SQL Server and PostgreSQL clients
  • Leveraging the PostgreSQL ecosystem while keeping T-SQL compatibility

Caveats

  • WiltonDB is based on PostgreSQL 15 and lacks newer-version features
  • Some T-SQL syntax may differ in behavior; consult the Babelfish compatibility docs
  • The md5 auth method is required (rather than scram-sha-256)

11 - polar

PolarDB for PostgreSQL kernel, providing Aurora-style compute-storage separation

The polar template replaces vanilla PostgreSQL with the Alibaba Cloud PolarDB for PostgreSQL kernel, providing "cloud-native", Aurora-style compute-storage separation.

For the full tutorial, see: PolarDB for PostgreSQL (POLAR) kernel notes


Configuration Overview

  • Config name: polar
  • Node count: single node
  • Description: PolarDB for PostgreSQL kernel
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported arch: x86_64
  • Related configs: meta

Usage:

./configure -c polar [-i <primary_ip>]

Configuration Content

Source file: pigsty/conf/polar.yml

---
#==============================================================#
# File      :   polar.yml
# Desc      :   Pigsty 1-node PolarDB Kernel Config Template
# Ctime     :   2020-08-05
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for PolarDB PG Kernel,
# Which is a PostgreSQL 15 fork with RAC flavor features
# tutorial: https://doc.pgsty.com/pgsql/kernel/polardb
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c polar
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # PolarDB Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty]}
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

        # PolarDB Ad Hoc Settings
        pg_version: 15                            # PolarDB PG is based on PG 15
        pg_mode: polar                            # PolarDB PG Compatible mode
        pg_packages: [ polardb, pgsql-common ]    # Replace PG kernel with PolarDB kernel
        pg_exporter_exclude_database: 'template0,template1,postgres,polardb_admin'
        pg_default_roles:                         # PolarDB require replicator as superuser
          - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
          - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
          - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
          - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
          - { name: postgres     ,superuser: true  ,comment: system superuser }
          - { name: replicator   ,superuser: true  ,replication: true ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator } # <- superuser is required for replication
          - { name: dbuser_dba   ,superuser: true  ,roles: [dbrole_admin]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 ,comment: pgsql admin user }
          - { name: dbuser_monitor ,roles: [pg_monitor] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }

  vars:                               # global variables
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 15                      # PolarDB is compatible with PG 15
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root

...

Configuration Breakdown

The polar template uses Alibaba Cloud's open-source PolarDB for PostgreSQL kernel, providing cloud-native database capabilities.

Key Features

  • Compute-storage separation: compute and storage scale independently
  • One writer, many readers: read replicas scale out in seconds
  • Compatible with the PostgreSQL ecosystem and SQL dialect
  • Supports shared-storage deployments, suited to cloud environments

Use Cases

  • Cloud-native scenarios requiring compute-storage separation
  • Read-heavy, write-light workloads
  • Rapidly scaling out read replicas
  • Test environments for evaluating PolarDB features

Caveats

  • PolarDB is based on PostgreSQL 15 and lacks newer-version features
  • The replication user requires superuser privileges (unlike vanilla PostgreSQL)
  • Some PostgreSQL extensions may have compatibility issues
  • ARM64 architecture is not supported
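
A quick post-deployment sanity check, using the dbuser_meta credentials defined in this template; the reported version string should identify the PolarDB kernel on PG 15:

psql postgres://dbuser_meta:[email protected]:5432/meta -c 'SELECT version();'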

12 - ivory

IvorySQL kernel, providing Oracle syntax and PL/SQL compatibility

The ivory template replaces vanilla PostgreSQL with HighGo's IvorySQL kernel, providing Oracle syntax and PL/SQL compatibility.

For the full tutorial, see: IvorySQL (Oracle-compatible) kernel notes


Configuration Overview

  • Config name: ivory
  • Node count: single node
  • Description: IvorySQL Oracle-compatible kernel
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported arch: x86_64, aarch64
  • Related configs: meta

Usage:

./configure -c ivory [-i <primary_ip>]

Configuration Content

Source file: pigsty/conf/ivory.yml

---
#==============================================================#
# File      :   ivory.yml
# Desc      :   IvorySQL 4 (Oracle Compatible) template
# Ctime     :   2024-08-05
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/pgsql/kernel/ivorysql
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for IvorySQL 5 Kernel,
# Which is a PostgreSQL 18 fork with Oracle Compatibility
# tutorial: https://doc.pgsty.com/pgsql/kernel/ivorysql
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c ivory
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # IvorySQL Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty]}
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

        # IvorySQL Ad Hoc Settings
        pg_mode: ivory                                                 # Use IvorySQL Oracle Compatible Mode
        pg_packages: [ ivorysql, pgsql-common ]                        # install IvorySQL instead of postgresql kernel
        pg_libs: 'liboracle_parser, pg_stat_statements, auto_explain'  # pre-load oracle parser

  vars:                               # global variables
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # IvorySQL kernel is compatible with postgres 18
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Breakdown

The ivory template uses HighGo's open-source IvorySQL kernel, providing Oracle database compatibility.

Key Features

  • Supports Oracle PL/SQL syntax
  • Compatible with Oracle data types (NUMBER, VARCHAR2, etc.)
  • Supports Oracle-style packages
  • Retains all standard PostgreSQL functionality

Use Cases

  • Migrating from Oracle to PostgreSQL
  • Applications that must support both Oracle and PostgreSQL syntax
  • Leveraging the PostgreSQL ecosystem while keeping PL/SQL compatibility
  • Test environments for evaluating IvorySQL features

Caveats

  • This IvorySQL release is based on PostgreSQL 18
  • liboracle_parser must be loaded via shared_preload_libraries
  • pgbackrest may hit checksum issues in Oracle-compatible mode, so PITR capability is limited
  • IvorySQL packages were historically EL8/EL9 only; cross-check the supported OS list above for Debian/Ubuntu availability
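
A quick smoke test of the Oracle compatibility layer, assuming the ivory mode above with liboracle_parser preloaded; the emp table is hypothetical, while NUMBER, VARCHAR2, and SYSDATE are Oracle-isms IvorySQL accepts:

-- Oracle-flavored DDL and DML accepted by IvorySQL's oracle parser
CREATE TABLE emp (id NUMBER PRIMARY KEY, name VARCHAR2(64), hired DATE);
INSERT INTO emp VALUES (1, 'alice', SYSDATE);
SELECT name, TO_CHAR(hired, 'YYYY-MM-DD') AS hired_on FROM emp;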

13 - mysql

OpenHalo kernel, providing MySQL wire-protocol and syntax compatibility

The mysql template replaces vanilla PostgreSQL with the OpenHalo kernel, providing MySQL wire protocol and SQL syntax compatibility.


Configuration Overview

  • Config name: mysql
  • Node count: single node
  • Description: OpenHalo MySQL-compatible kernel
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported arch: x86_64
  • Related configs: meta

Usage:

./configure -c mysql [-i <primary_ip>]

Configuration Content

Source file: pigsty/conf/mysql.yml

---
#==============================================================#
# File      :   mysql.yml
# Desc      :   1-node OpenHaloDB (MySQL Compatible) template
# Ctime     :   2025-04-03
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for OpenHalo PG Kernel,
# Which is a PostgreSQL 14 fork with MySQL Wire Compatibility
# tutorial: https://doc.pgsty.com/pgsql/kernel/openhalo
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c mysql
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # OpenHalo Database Cluster
    #----------------------------------------------#
    # connect with mysql client: mysql -h 10.10.10.10 -u dbuser_meta -D mysql (the actual database is 'postgres', and 'mysql' is a schema)
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
        pg_databases:
          - {name: postgres, extensions: [aux_mysql]} # the mysql compatible database
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty]}
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

        # OpenHalo Ad Hoc Setting
        pg_mode: mysql                    # MySQL Compatible Mode by HaloDB
        pg_version: 14                    # The current HaloDB is compatible with PG Major Version 14
        pg_packages: [ openhalodb, pgsql-common ]  # install openhalodb instead of postgresql kernel

  vars:
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 14                      # OpenHalo is compatible with PG 14
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Breakdown

The mysql template uses the OpenHalo kernel, letting you connect to PostgreSQL with MySQL client tools.

Key Features

  • Speaks the MySQL protocol (port 3306), compatible with MySQL clients
  • Supports a subset of MySQL SQL syntax
  • Preserves PostgreSQL's ACID properties and storage engine
  • Accepts connections over both the PostgreSQL and MySQL protocols

Connecting

# with the MySQL client
mysql -h 10.10.10.10 -P 3306 -u dbuser_meta -pDBUser.Meta

# the PostgreSQL connection still works as usual
psql postgres://dbuser_meta:[email protected]:5432/meta

Use Cases

  • Migrating from MySQL to PostgreSQL
  • Applications that must serve both MySQL and PostgreSQL clients
  • Leveraging the PostgreSQL ecosystem while keeping MySQL compatibility

Caveats

  • OpenHalo is based on PostgreSQL 14 and lacks newer-version features
  • Some MySQL syntax may differ in behavior
  • OpenHalo packages were historically EL8/EL9 only; cross-check the supported OS list above
  • ARM64 architecture is not supported

14 - pgtde

Percona PostgreSQL kernel, providing transparent data encryption (pg_tde)

The pgtde template uses the Percona PostgreSQL kernel, providing Transparent Data Encryption (TDE).


Configuration Overview

  • Config name: pgtde
  • Node count: single node
  • Description: Percona PostgreSQL with transparent data encryption
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported arch: x86_64
  • Related configs: meta

Usage:

./configure -c pgtde [-i <primary_ip>]

Configuration Content

Source file: pigsty/conf/pgtde.yml

---
#==============================================================#
# File      :   pgtde.yml
# Desc      :   PG TDE with Percona PostgreSQL 1-node template
# Ctime     :   2025-07-04
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for Percona PostgreSQL Distribution
# With pg_tde extension, which is compatible with PostgreSQL 18.1
# tutorial: https://doc.pgsty.com/pgsql/kernel/percona
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c pgtde
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # Percona Postgres Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
        pg_databases:
          - name: meta
            baseline: cmdb.sql
            comment: pigsty tde database
            schemas: [pigsty]
            extensions: [ vector, postgis, pg_tde ,pgaudit, { name: pg_stat_monitor, schema: monitor } ]
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

        # Percona PostgreSQL TDE Ad Hoc Settings
        pg_packages: [ percona-main, pgsql-common ]  # install percona postgres packages
        pg_libs: 'pg_tde, pgaudit, pg_stat_statements, pg_stat_monitor, auto_explain'

  vars:
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql,percona
    node_tune: oltp

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # Default Percona TDE PG Major Version is 18
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Breakdown

The pgtde template uses the Percona PostgreSQL kernel, providing enterprise-grade transparent data encryption.

Key Features

  • Transparent data encryption: data is encrypted on disk, transparently to applications
  • Key management: supports local keyrings and external key management systems (KMS)
  • Table-level encryption: sensitive tables can be encrypted selectively
  • Full compatibility: otherwise behaves like vanilla PostgreSQL

Use Cases

  • Meeting data-security compliance requirements (e.g. PCI-DSS, HIPAA)
  • Storing sensitive data (personal information, financial records)
  • Encryption of data at rest
  • Enterprise environments with strict data-security requirements

Usage Example

-- create an encrypted table; tde_heap is the table access method provided by pg_tde
-- (a key provider and principal key must be configured first, see the sketch below)
CREATE TABLE sensitive_data (
    id SERIAL PRIMARY KEY,
    ssn VARCHAR(11)
) USING tde_heap;

-- or switch an existing table to encrypted storage
ALTER TABLE existing_table SET ACCESS METHOD tde_heap;
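
The key setup that must precede the statements above is sketched here; the provider and key names are hypothetical, and the function names follow Percona's pg_tde documentation as an assumption that may vary across pg_tde versions:

-- assumption: pg_tde key-provider API per Percona docs; names may differ by version
SELECT pg_tde_add_database_key_provider_file('local-keyring', '/var/lib/pgsql/keyring.file');
SELECT pg_tde_set_key_using_database_key_provider('tde-key', 'local-keyring');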

Caveats

  • This Percona PostgreSQL build is based on PostgreSQL 18
  • Encryption adds some performance overhead (typically 5-15%)
  • Encryption keys must be managed with care
  • ARM64 architecture is not supported

15 - oriole

OrioleDB kernel, a bloat-free OLTP-enhanced storage engine

The oriole template replaces PostgreSQL's default heap storage with the OrioleDB storage engine, providing bloat-free, high-performance OLTP capability.


Configuration Overview

  • Config name: oriole
  • Node count: single node
  • Description: OrioleDB bloat-free storage engine
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported arch: x86_64
  • Related configs: meta

Usage:

./configure -c oriole [-i <primary_ip>]

Configuration Content

Source file: pigsty/conf/oriole.yml

---
#==============================================================#
# File      :   oriole.yml
# Desc      :   1-node OrioleDB (OLTP Enhancement) template
# Ctime     :   2025-04-05
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for OrioleDB Kernel,
# Which is a Patched PostgreSQL 17 fork without bloat
# tutorial: https://doc.pgsty.com/pgsql/kernel/oriole
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c oriole
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # OrioleDB Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty], extensions: [orioledb]}
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

        # OrioleDB Ad Hoc Settings
        pg_mode: oriole                                         # oriole compatible mode
        pg_packages: [ oriole, pgsql-common ]                   # install OrioleDB kernel
        pg_libs: 'orioledb, pg_stat_statements, auto_explain'   # Load OrioleDB Extension

  vars:                               # global variables
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 17                      # OrioleDB Kernel is based on PG 17
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Notes

The oriole template uses the OrioleDB storage engine to tackle PostgreSQL table bloat at its root.

Key Features

  • Bloat-free design: MVCC is implemented with an UNDO log, instead of keeping dead tuple versions in the heap
  • No VACUUM required: eliminates the performance jitter caused by autovacuum
  • Row-level WAL: more efficient logging and replication
  • Compressed storage: built-in data compression reduces storage footprint

Use Cases

  • Update-heavy OLTP workloads
  • Applications sensitive to write latency
  • Workloads that need stable response times (no VACUUM interference)
  • Large, frequently updated tables prone to bloat

Usage

-- create a table stored with OrioleDB
CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    customer_id INT,
    amount DECIMAL(10,2)
) USING orioledb;

-- existing heap tables cannot be converted in place; they must be rebuilt
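
For example, a minimal rebuild sketch for moving an existing heap table onto OrioleDB (the orders_new name is hypothetical; validate the copy before swapping names):

psql meta <<'SQL'
CREATE TABLE orders_new (LIKE orders INCLUDING ALL) USING orioledb;  -- new OrioleDB copy
INSERT INTO orders_new SELECT * FROM orders;                         -- copy existing rows
-- after validation: DROP TABLE orders; ALTER TABLE orders_new RENAME TO orders;
SQL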

Caveats

  • OrioleDB is based on PostgreSQL 17
  • orioledb must be added to shared_preload_libraries (see the check after this list)
  • Some PostgreSQL features may not be fully supported
  • The ARM64 architecture is not supported
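
A quick sanity check after deployment (a sketch; run as the postgres dbsu on the pg-meta primary):

psql meta -c 'SHOW shared_preload_libraries;'    # should include orioledb
psql meta -c "SELECT extname, extversion FROM pg_extension WHERE extname = 'orioledb';"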

16 - supabase

Self-host Supabase, the open-source Firebase alternative, on Pigsty-managed PostgreSQL

The supabase template provides a reference configuration for self-hosting Supabase, using Pigsty-managed PostgreSQL as the underlying storage.

For more details, see the Supabase self-hosting tutorial.


Configuration Overview

  • Conf Name: supabase
  • Node Count: single node
  • Description: self-host Supabase on Pigsty-managed PostgreSQL
  • OS Distros: el8, el9, el10, d12, d13, u22, u24
  • OS Archs: x86_64, aarch64
  • Related: meta, rich

Usage:

./configure -c supabase [-i <primary_ip>]

Configuration Content

Source: pigsty/conf/supabase.yml

---
#==============================================================#
# File      :   supabase.yml
# Desc      :   Pigsty configuration for self-hosting supabase
# Ctime     :   2023-09-19
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# supabase is available on el8/el9/u22/u24/d12 with pg15,16,17,18
# tutorial: https://doc.pgsty.com/app/supabase
# Usage:
#   curl https://repo.pigsty.io/get | bash    # install pigsty
#   ./configure -c supabase   # use this supabase conf template
#   ./deploy.yml              # install pigsty & pgsql & minio
#   ./docker.yml              # install docker & docker compose
#   ./app.yml                 # launch supabase with docker compose

all:
  children:


    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        repo_enabled: false    # disable local repo

    #----------------------------------------------#
    # ETCD : https://doc.pgsty.com/etcd
    #----------------------------------------------#
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd
        etcd_safeguard: false  # enable to prevent purging running etcd instance

    #----------------------------------------------#
    # MINIO : https://doc.pgsty.com/minio
    #----------------------------------------------#
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 }
      vars:
        minio_cluster: minio
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    #----------------------------------------------#
    # PostgreSQL cluster for Supabase self-hosting
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          # supabase roles: anon, authenticated, dashboard_user
          - { name: anon           ,login: false }
          - { name: authenticated  ,login: false }
          - { name: dashboard_user ,login: false ,replication: true ,createdb: true ,createrole: true }
          - { name: service_role   ,login: false ,bypassrls: true }
          # supabase users: please use the same password
          - { name: supabase_admin             ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: true   ,roles: [ dbrole_admin ] ,superuser: true ,replication: true ,createdb: true ,createrole: true ,bypassrls: true }
          - { name: authenticator              ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin, authenticated ,anon ,service_role ] }
          - { name: supabase_auth_admin        ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin ] ,createrole: true }
          - { name: supabase_storage_admin     ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin, authenticated ,anon ,service_role ] ,createrole: true }
          - { name: supabase_functions_admin   ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin ] ,createrole: true }
          - { name: supabase_replication_admin ,password: 'DBUser.Supa' ,replication: true ,roles: [ dbrole_admin ]}
          - { name: supabase_etl_admin         ,password: 'DBUser.Supa' ,replication: true ,roles: [ pg_read_all_data ]}
          - { name: supabase_read_only_user    ,password: 'DBUser.Supa' ,bypassrls: true ,roles:   [ pg_read_all_data, dbrole_readonly ]}
        pg_databases:
          - name: postgres
            baseline: supabase.sql
            owner: supabase_admin
            comment: supabase postgres database
            schemas: [ extensions ,auth ,realtime ,storage ,graphql_public ,supabase_functions ,_analytics ,_realtime ]
            extensions:
              - { name: pgcrypto         ,schema: extensions } # cryptographic functions
              - { name: pg_net           ,schema: extensions } # async HTTP
              - { name: pgjwt            ,schema: extensions } # json web token API for postgres
              - { name: uuid-ossp        ,schema: extensions } # generate universally unique identifiers (UUIDs)
              - { name: pgsodium         ,schema: extensions } # pgsodium is a modern cryptography library for Postgres.
              - { name: supabase_vault   ,schema: extensions } # Supabase Vault Extension
              - { name: pg_graphql       ,schema: extensions } # pg_graphql: GraphQL support
              - { name: pg_jsonschema    ,schema: extensions } # pg_jsonschema: Validate json schema
              - { name: wrappers         ,schema: extensions } # wrappers: FDW collections
              - { name: http             ,schema: extensions } # http: allows web page retrieval inside the database.
              - { name: pg_cron          ,schema: extensions } # pg_cron: Job scheduler for PostgreSQL
              - { name: timescaledb      ,schema: extensions } # timescaledb: Enables scalable inserts and complex queries for time-series data
              - { name: pg_tle           ,schema: extensions } # pg_tle: Trusted Language Extensions for PostgreSQL
              - { name: vector           ,schema: extensions } # pgvector: the vector similarity search
              - { name: pgmq             ,schema: extensions } # pgmq: A lightweight message queue like AWS SQS and RSMQ
          - { name: supabase ,owner: supabase_admin ,comment: supabase analytics database ,schemas: [ extensions, _analytics ] }

        # supabase required extensions
        pg_libs: 'timescaledb, pgsodium, plpgsql, plpgsql_check, pg_cron, pg_net, pg_stat_statements, auto_explain, pg_wait_sampling, pg_tle, plan_filter'
        pg_extensions: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
        pg_parameters: { cron.database_name: postgres }
        pg_hba_rules: # supabase hba rules, require access from docker network
          - { user: all ,db: postgres  ,addr: intra         ,auth: pwd ,title: 'allow supabase access from intranet'    }
          - { user: all ,db: postgres  ,addr: 172.17.0.0/16 ,auth: pwd ,title: 'allow access from local docker network' }
        node_crontab:
          - '00 01 * * * postgres /pg/bin/pg-backup full'  # make a full backup every 1am
          - '*  *  * * * postgres /pg/bin/supa-kick'       # kick supabase _analytics lag per minute: https://github.com/pgsty/pigsty/issues/581

    #----------------------------------------------#
    # Supabase
    #----------------------------------------------#
    # ./docker.yml
    # ./app.yml

    # the supabase stateless containers (default username & password: supabase/pigsty)
    supabase:
      hosts:
        10.10.10.10: {}
      vars:
        docker_enabled: true                              # enable docker on this group
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        app: supabase                                     # specify app name (supa) to be installed (in the apps)
        apps:                                             # define all applications
          supabase:                                       # the definition of supabase app
            conf:                                         # override /opt/supabase/.env

              # IMPORTANT: CHANGE JWT_SECRET AND REGENERATE CREDENTIALS ACCORDINGLY!!!
              # https://supabase.com/docs/guides/self-hosting/docker#securing-your-services
              JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
              ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
              SERVICE_ROLE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
              PG_META_CRYPTO_KEY: your-encryption-key-32-chars-min

              DASHBOARD_USERNAME: supabase
              DASHBOARD_PASSWORD: pigsty

              # 32~64 random characters string for logflare
              LOGFLARE_PUBLIC_ACCESS_TOKEN: 1234567890abcdef1234567890abcdef
              LOGFLARE_PRIVATE_ACCESS_TOKEN: fedcba0987654321fedcba0987654321

              # postgres connection string (use the correct ip and port)
              POSTGRES_HOST: 10.10.10.10      # point to the local postgres node
              POSTGRES_PORT: 5436             # access via the 'default' service, which always routes to the primary postgres
              POSTGRES_DB: postgres           # the supabase underlying database
              POSTGRES_PASSWORD: DBUser.Supa  # password for supabase_admin and multiple supabase users

              # expose supabase via domain name
              SITE_URL: https://supa.pigsty                # <------- Change This to your external domain name
              API_EXTERNAL_URL: https://supa.pigsty        # <------- Otherwise the storage api may not work!
              SUPABASE_PUBLIC_URL: https://supa.pigsty     # <------- DO NOT FORGET TO PUT IT IN infra_portal!

              # if using s3/minio as file storage
              S3_BUCKET: data
              S3_ENDPOINT: https://sss.pigsty:9000
              S3_ACCESS_KEY: s3user_data
              S3_SECRET_KEY: S3User.Data
              S3_FORCE_PATH_STYLE: true
              S3_PROTOCOL: https
              S3_REGION: stub
              MINIO_DOMAIN_IP: 10.10.10.10  # sss.pigsty domain name will resolve to this ip statically

              # if using SMTP (optional)
              #SMTP_ADMIN_EMAIL: [email protected]
              #SMTP_HOST: supabase-mail
              #SMTP_PORT: 2500
              #SMTP_USER: fake_mail_user
              #SMTP_PASS: fake_mail_password
              #SMTP_SENDER_NAME: fake_sender
              #ENABLE_ANONYMOUS_USERS: false


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    version: v4.0.0                       # pigsty version string
    admin_ip: 10.10.10.10                 # admin node ip address
    region: default                       # upstream mirror region: default|china|europe
    proxy_env:                            # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    certbot_sign: false                   # enable certbot to sign https certificate for infra portal
    certbot_email: [email protected]         # replace your email address to receive expiration notice
    infra_portal:                         # infra services exposed via portal
      home      : { domain: i.pigsty }    # default domain name
      pgadmin   : { domain: adm.pigsty ,endpoint: "${admin_ip}:8885" }
      bytebase  : { domain: ddl.pigsty ,endpoint: "${admin_ip}:8887" }
      #minio     : { domain: m.pigsty   ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

      # Nginx / Domain / HTTPS : https://doc.pgsty.com/admin/portal
      supa :                              # nginx server config for supabase
        domain: supa.pigsty               # REPLACE IT WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:8000"      # supabase service endpoint: IP:PORT
        websocket: true                   # add websocket support
        certbot: supa.pigsty              # certbot cert name, apply with `make cert`

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname on single node mode
    node_tune: oltp                       # node tuning specs: oltp,olap,tiny,crit
    node_etc_hosts:                       # add static domains to all nodes /etc/hosts
      - 10.10.10.10 i.pigsty sss.pigsty supa.pigsty
    node_repo_modules: node,pgsql,infra   # use pre-made local repo rather than install from upstream
    node_repo_remove: true                # remove existing node repo for node managed by pigsty
    #node_packages: [openssh-server]      # packages to be installed current nodes with latest version
    #node_timezone: Asia/Hong_Kong        # overwrite node timezone

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                        # default postgres version
    pg_conf: oltp.yml                     # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_safeguard: false                   # prevent purging running postgres instance?
    pg_default_schemas: [ monitor, extensions ] # add new schema: extensions
    pg_default_extensions:                # default extensions to be created
      - { name: pg_stat_statements ,schema: monitor     }
      - { name: pgstattuple        ,schema: monitor     }
      - { name: pg_buffercache     ,schema: monitor     }
      - { name: pageinspect        ,schema: monitor     }
      - { name: pg_prewarm         ,schema: monitor     }
      - { name: pg_visibility      ,schema: monitor     }
      - { name: pg_freespacemap    ,schema: monitor     }
      - { name: pg_wait_sampling   ,schema: monitor     }
      # move default extensions to `extensions` schema for supabase
      - { name: postgres_fdw       ,schema: extensions  }
      - { name: file_fdw           ,schema: extensions  }
      - { name: btree_gist         ,schema: extensions  }
      - { name: btree_gin          ,schema: extensions  }
      - { name: pg_trgm            ,schema: extensions  }
      - { name: intagg             ,schema: extensions  }
      - { name: intarray           ,schema: extensions  }
      - { name: pg_repack          ,schema: extensions  }

    #----------------------------------------------#
    # BACKUP : https://doc.pgsty.com/pgsql/backup
    #----------------------------------------------#
    minio_endpoint: https://sss.pigsty:9000 # explicit overwrite minio endpoint with haproxy port
    pgbackrest_method: minio              # pgbackrest repo method: local,minio,[user-defined...]
    pgbackrest_repo:                      # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                              # default pgbackrest repo with local posix fs
        path: /pg/backup                  # local backup directory, `/pg/backup` by default
        retention_full_type: count        # retention full backups by count
        retention_full: 2                 # keep 2, at most 3 full backups when using local fs repo
      minio:                              # optional minio repo for pgbackrest
        type: s3                          # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty           # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1              # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql                  # minio bucket name, `pgsql` by default
        s3_key: pgbackrest                # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup      # minio user secret key for pgbackrest <------------------ HEY, DID YOU CHANGE THIS?
        s3_uri_style: path                # use path style uri for minio rather than host style
        path: /pgbackrest                 # minio backup path, default is `/pgbackrest`
        storage_port: 9000                # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                          # Enable block incremental backup
        bundle: y                         # bundle small files into a single file
        bundle_limit: 20MiB               # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB               # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc          # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest           # AES encryption password, default is 'pgBackRest'  <----- HEY, DID YOU CHANGE THIS?
        retention_full_type: time         # retention full backup by time on minio repo
        retention_full: 14                # keep full backup for the last 14 days
      s3:                                 # you can use cloud object storage as backup repo
        type: s3                          # Add your object storage credentials here!
        s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
        s3_region: oss-cn-beijing
        s3_bucket: <your_bucket_name>
        s3_key: <your_access_key>
        s3_key_secret: <your_secret_key>
        s3_uri_style: host
        path: /pgbackrest
        bundle: y                         # bundle small files into a single file
        bundle_limit: 20MiB               # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB               # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc          # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest           # AES encryption password, default is 'pgBackRest'
        retention_full_type: time         # retention full backup by time on minio repo
        retention_full: 14                # keep full backup for the last 14 days

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Notes

The supabase template delivers a complete Supabase self-hosting solution, letting you run this open-source Firebase alternative on your own infrastructure.

Architecture

  • PostgreSQL: production-grade PostgreSQL managed by Pigsty (HA capable)
  • Docker containers: stateless Supabase services (Auth, Storage, Realtime, Edge Functions, etc.)
  • MinIO: S3-compatible object storage for file uploads and PostgreSQL backups
  • Nginx: reverse proxy and HTTPS termination

Key Features

  • Replaces Supabase's bundled database container with Pigsty-managed PostgreSQL
  • Supports PostgreSQL high availability (can scale out to a three-node cluster)
  • Installs all extensions required by Supabase (pg_net, pgjwt, pg_graphql, vector, etc.)
  • Integrates MinIO object storage for file uploads and backups
  • Supports HTTPS with automatic Let's Encrypt certificates

Deployment Steps

curl https://repo.pigsty.io/get | bash    # download Pigsty
./configure -c supabase                   # use the supabase config template
./deploy.yml                              # install Pigsty, PostgreSQL & MinIO
./docker.yml                              # install Docker
./app.yml                                 # launch the Supabase containers
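
After app.yml completes, the stateless services can be inspected with Docker Compose (a sketch, assuming the app is rendered under /opt/supabase as the .env comment in the config above suggests):

cd /opt/supabase && docker compose ps     # all services should be up / healthy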

Access

# Supabase Studio
https://supa.pigsty   (username: supabase, password: pigsty)

# connect to PostgreSQL directly
psql postgres://supabase_admin:[email protected]:5432/postgres

Use Cases

  • Self-hosting a BaaS (Backend as a Service) platform
  • Keeping full control over data and infrastructure
  • Enterprise-grade PostgreSQL high availability and backups
  • Compliance or cost concerns with the Supabase cloud service

Caveats

  • JWT_SECRET must be changed: use a random string of at least 32 characters, and regenerate ANON_KEY and SERVICE_ROLE_KEY accordingly (see the sketch after this list)
  • Configure the correct domain names (SITE_URL, API_EXTERNAL_URL)
  • Enable HTTPS in production (certbot can sign certificates automatically)
  • The Docker network must be able to reach PostgreSQL (the 172.17.0.0/16 HBA rule is already in place)
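
A minimal sketch for generating a fresh secret with openssl (ANON_KEY and SERVICE_ROLE_KEY are JWTs signed with this secret, so they must be regenerated as well, e.g. with the generator from the Supabase self-hosting docs):

openssl rand -hex 32    # 64 hex chars, well above the 32-char minimum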

17 - High Availability Templates

18 - ha/simu

20-node production simulation environment for large-scale deployment testing

The ha/simu template is a 20-node production environment simulation that requires a powerful host machine to run.


Configuration Overview

  • Conf Name: ha/simu
  • Node Count: 20 nodes, pigsty/vagrant/spec/simu.rb
  • Description: 20-node production simulation; requires a powerful host machine
  • OS Distros: el8, el9, el10, d12, d13, u22, u24
  • OS Archs: x86_64, aarch64

Usage:

./configure -c ha/simu [-i <primary_ip>]

Configuration Content

Source: pigsty/conf/ha/simu.yml

---
#==============================================================#
# File      :   simu.yml
# Desc      :   Pigsty Simubox: a 20 node prod simulation env
# Ctime     :   2023-07-20
# Mtime     :   2025-12-23
# Docs      :   https://doc.pgsty.com/config
# License   :   AGPLv3 @ https://doc.pgsty.com/about/license
# Copyright :   2018-2025  Ruohang Feng / Vonng ([email protected])
#==============================================================#

all:

  children:

    #==========================================================#
    # infra: 3 nodes
    #==========================================================#
    # ./infra.yml -l infra
    # ./docker.yml -l infra (optional)
    infra:
      hosts:
        10.10.10.10: {}
        10.10.10.11: { repo_enabled: false }
        10.10.10.12: { repo_enabled: false }
      vars:
        docker_enabled: true
        node_conf: oltp         # use oltp template for infra nodes
        pg_conf: oltp.yml       # use oltp template for infra pgsql
        pg_exporters:           # bin/pgmon-add pg-meta2/pg-src2/pg-dst2
          20001: {pg_cluster: pg-meta2   ,pg_seq: 1 ,pg_host: 10.10.10.10, pg_databases: [{ name: meta }]}
          20002: {pg_cluster: pg-meta2   ,pg_seq: 2 ,pg_host: 10.10.10.11, pg_databases: [{ name: meta }]}
          20003: {pg_cluster: pg-meta2   ,pg_seq: 3 ,pg_host: 10.10.10.12, pg_databases: [{ name: meta }]}

          20004: {pg_cluster: pg-src2    ,pg_seq: 1 ,pg_host: 10.10.10.31, pg_databases: [{ name: src }]}
          20005: {pg_cluster: pg-src2    ,pg_seq: 2 ,pg_host: 10.10.10.32, pg_databases: [{ name: src }]}
          20006: {pg_cluster: pg-src2    ,pg_seq: 3 ,pg_host: 10.10.10.33, pg_databases: [{ name: src }]}

          20007: {pg_cluster: pg-dst2    ,pg_seq: 1 ,pg_host: 10.10.10.41, pg_databases: [{ name: dst }]}
          20008: {pg_cluster: pg-dst2    ,pg_seq: 2 ,pg_host: 10.10.10.42, pg_databases: [{ name: dst }]}
          20009: {pg_cluster: pg-dst2    ,pg_seq: 3 ,pg_host: 10.10.10.43, pg_databases: [{ name: dst }]}


    #==========================================================#
    # nodes: 20 nodes
    #==========================================================#
    # ./node.yml
    nodes:
      hosts:
        10.10.10.10 : { nodename: meta1  ,node_cluster: meta   ,pg_cluster: pg-meta  ,pg_seq: 1 ,pg_role: primary, infra_seq: 1 }
        10.10.10.11 : { nodename: meta2  ,node_cluster: meta   ,pg_cluster: pg-meta  ,pg_seq: 2 ,pg_role: replica, infra_seq: 2 }
        10.10.10.12 : { nodename: meta3  ,node_cluster: meta   ,pg_cluster: pg-meta  ,pg_seq: 3 ,pg_role: replica, infra_seq: 3 }
        10.10.10.18 : { nodename: proxy1 ,node_cluster: proxy  ,vip_address: 10.10.10.20 ,vip_vrid: 20 ,vip_interface: eth1 ,vip_role: master }
        10.10.10.19 : { nodename: proxy2 ,node_cluster: proxy  ,vip_address: 10.10.10.20 ,vip_vrid: 20 ,vip_interface: eth1 ,vip_role: backup }
        10.10.10.21 : { nodename: minio1 ,node_cluster: minio  ,minio_cluster: minio ,minio_seq: 1 }
        10.10.10.22 : { nodename: minio2 ,node_cluster: minio  ,minio_cluster: minio ,minio_seq: 2 }
        10.10.10.23 : { nodename: minio3 ,node_cluster: minio  ,minio_cluster: minio ,minio_seq: 3 }
        10.10.10.24 : { nodename: minio4 ,node_cluster: minio  ,minio_cluster: minio ,minio_seq: 4 }
        10.10.10.25 : { nodename: etcd1  ,node_cluster: etcd   ,etcd_cluster: etcd ,etcd_seq: 1 }
        10.10.10.26 : { nodename: etcd2  ,node_cluster: etcd   ,etcd_cluster: etcd ,etcd_seq: 2 }
        10.10.10.27 : { nodename: etcd3  ,node_cluster: etcd   ,etcd_cluster: etcd ,etcd_seq: 3 }
        10.10.10.28 : { nodename: etcd4  ,node_cluster: etcd   ,etcd_cluster: etcd ,etcd_seq: 4 }
        10.10.10.29 : { nodename: etcd5  ,node_cluster: etcd   ,etcd_cluster: etcd ,etcd_seq: 5 }
        10.10.10.31 : { nodename: pg-src-1 ,node_cluster: pg-src ,node_id_from_pg: true }
        10.10.10.32 : { nodename: pg-src-2 ,node_cluster: pg-src ,node_id_from_pg: true }
        10.10.10.33 : { nodename: pg-src-3 ,node_cluster: pg-src ,node_id_from_pg: true }
        10.10.10.41 : { nodename: pg-dst-1 ,node_cluster: pg-dst ,node_id_from_pg: true }
        10.10.10.42 : { nodename: pg-dst-2 ,node_cluster: pg-dst ,node_id_from_pg: true }
        10.10.10.43 : { nodename: pg-dst-3 ,node_cluster: pg-dst ,node_id_from_pg: true }

    #==========================================================#
    # etcd: 5 nodes dedicated etcd cluster
    #==========================================================#
    # ./etcd.yml -l etcd;
    etcd:
      hosts:
        10.10.10.25: {}
        10.10.10.26: {}
        10.10.10.27: {}
        10.10.10.28: {}
        10.10.10.29: {}
      vars: {}

    #==========================================================#
    # minio: 4 nodes dedicated minio cluster
    #==========================================================#
    # ./minio.yml -l minio;
    minio:
      hosts:
        10.10.10.21: {}
        10.10.10.22: {}
        10.10.10.23: {}
        10.10.10.24: {}
      vars:
        minio_data: '/data{1...4}' # 4 node x 4 disk
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }


    #==========================================================#
    # proxy: 2 nodes used as dedicated haproxy server
    #==========================================================#
    # ./node.yml -l proxy
    proxy:
      hosts:
        10.10.10.18: {}
        10.10.10.19: {}
      vars:
        vip_enabled: true
        haproxy_services:      # expose minio service : sss.pigsty:9000
          - name: minio        # [REQUIRED] service name, unique
            port: 9000         # [REQUIRED] service port, unique
            balance: leastconn # Use leastconn algorithm and minio health check
            options: [ "option httpchk", "option http-keep-alive", "http-check send meth OPTIONS uri /minio/health/live", "http-check expect status 200" ]
            servers:           # reload service with ./node.yml -t haproxy_config,haproxy_reload
              - { name: minio-1 ,ip: 10.10.10.21 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-2 ,ip: 10.10.10.22 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-3 ,ip: 10.10.10.23 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-4 ,ip: 10.10.10.24 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

    #==========================================================#
    # pg-meta: reuse infra node as meta cmdb
    #==========================================================#
    # ./pgsql.yml -l pg-meta
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1 , pg_role: primary }
        10.10.10.11: { pg_seq: 2 , pg_role: replica }
        10.10.10.12: { pg_seq: 3 , pg_role: replica }
      vars:
        pg_cluster: pg-meta
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1
        pg_users:
          - {name: dbuser_meta     ,password: DBUser.Meta     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
          - {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database    }
          - {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database   }
          - {name: dbuser_kong     ,password: DBUser.Kong     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for kong api gateway    }
          - {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service       }
          - {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service     }
          - {name: dbuser_noco     ,password: DBUser.Noco     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for nocodb service      }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [{name: vector}]}
          - { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }
          - { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          - { name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong the api gateway database }
          - { name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          - { name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database }
          - { name: noco     ,owner: dbuser_noco     ,revokeconn: true ,comment: nocodb database }
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        pg_libs: 'pg_stat_statements, auto_explain' # shared preload libraries: pg_stat_statements & auto_explain
        node_crontab:  # make a full backup on monday 1am, and incremental backups on the other days
          - '00 01 * * 1 postgres /pg/bin/pg-backup full'
          - '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup'

    #==========================================================#
    # pg-src: dedicate 3 node source cluster
    #==========================================================#
    # ./pgsql.yml -l pg-src
    pg-src:
      hosts:
        10.10.10.31: { pg_seq: 1 ,pg_role: primary }
        10.10.10.32: { pg_seq: 2 ,pg_role: replica }
        10.10.10.33: { pg_seq: 3 ,pg_role: replica }
      vars:
        pg_cluster: pg-src
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: src }]


    #==========================================================#
    # pg-dst: dedicate 3 node destination cluster
    #==========================================================#
    # ./pgsql.yml -l pg-dst
    pg-dst:
      hosts:
        10.10.10.41: { pg_seq: 1 ,pg_role: primary }
        10.10.10.42: { pg_seq: 2 ,pg_role: replica }
        10.10.10.43: { pg_seq: 3 ,pg_role: replica }
      vars:
        pg_cluster: pg-dst
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.4/24
        pg_vip_interface: eth1
        pg_users: [ { name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] } ]
        pg_databases: [ { name: dst } ]


    #==========================================================#
    # redis-meta: reuse the 5 etcd nodes as redis sentinel
    #==========================================================#
    # ./redis.yml -l redis-meta
    redis-meta:
      hosts:
        10.10.10.25: { redis_node: 1 , redis_instances: { 26379: {} } }
        10.10.10.26: { redis_node: 2 , redis_instances: { 26379: {} } }
        10.10.10.27: { redis_node: 3 , redis_instances: { 26379: {} } }
        10.10.10.28: { redis_node: 4 , redis_instances: { 26379: {} } }
        10.10.10.29: { redis_node: 5 , redis_instances: { 26379: {} } }
      vars:
        redis_cluster: redis-meta
        redis_password: 'redis.meta'
        redis_mode: sentinel
        redis_max_memory: 256MB
        redis_sentinel_monitor:  # primary list for redis sentinel, use cls as name, primary ip:port
          - { name: redis-src, host: 10.10.10.31, port: 6379 ,password: redis.src, quorum: 1 }
          - { name: redis-dst, host: 10.10.10.41, port: 6379 ,password: redis.dst, quorum: 1 }

    #==========================================================#
    # redis-src: reuse pg-src 3 nodes for redis
    #==========================================================#
    # ./redis.yml -l redis-src
    redis-src:
      hosts:
        10.10.10.31: { redis_node: 1 , redis_instances: {6379: {  } }}
        10.10.10.32: { redis_node: 2 , redis_instances: {6379: { replica_of: '10.10.10.31 6379' }, 6380: { replica_of: '10.10.10.32 6379' } }}
        10.10.10.33: { redis_node: 3 , redis_instances: {6379: { replica_of: '10.10.10.31 6379' }, 6380: { replica_of: '10.10.10.33 6379' } }}
      vars:
        redis_cluster: redis-src
        redis_password: 'redis.src'
        redis_max_memory: 64MB

    #==========================================================#
    # redis-dst: reuse pg-dst 3 nodes for redis
    #==========================================================#
    # ./redis.yml -l redis-dst
    redis-dst:
      hosts:
        10.10.10.41: { redis_node: 1 , redis_instances: {6379: {  }                               }}
        10.10.10.42: { redis_node: 2 , redis_instances: {6379: { replica_of: '10.10.10.41 6379' } }}
        10.10.10.43: { redis_node: 3 , redis_instances: {6379: { replica_of: '10.10.10.41 6379' } }}
      vars:
        redis_cluster: redis-dst
        redis_password: 'redis.dst'
        redis_max_memory: 64MB

    #==========================================================#
    # pg-tmp: reuse proxy nodes as pgsql cluster
    #==========================================================#
    # ./pgsql.yml -l pg-tmp
    pg-tmp:
      hosts:
        10.10.10.18: { pg_seq: 1 ,pg_role: primary }
        10.10.10.19: { pg_seq: 2 ,pg_role: replica }
      vars:
        pg_cluster: pg-tmp
        pg_users: [ { name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] } ]
        pg_databases: [ { name: tmp } ]

    #==========================================================#
    # pg-etcd: reuse etcd nodes as pgsql cluster
    #==========================================================#
    # ./pgsql.yml -l pg-etcd
    pg-etcd:
      hosts:
        10.10.10.25: { pg_seq: 1 ,pg_role: primary }
        10.10.10.26: { pg_seq: 2 ,pg_role: replica }
        10.10.10.27: { pg_seq: 3 ,pg_role: replica }
        10.10.10.28: { pg_seq: 4 ,pg_role: replica }
        10.10.10.29: { pg_seq: 5 ,pg_role: offline }
      vars:
        pg_cluster: pg-etcd
        pg_users: [ { name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] } ]
        pg_databases: [ { name: etcd } ]

    #==========================================================#
    # pg-minio: reuse minio nodes as pgsql cluster
    #==========================================================#
    # ./pgsql.yml -l pg-minio
    pg-minio:
      hosts:
        10.10.10.21: { pg_seq: 1 ,pg_role: primary }
        10.10.10.22: { pg_seq: 2 ,pg_role: replica }
        10.10.10.23: { pg_seq: 3 ,pg_role: replica }
        10.10.10.24: { pg_seq: 4 ,pg_role: replica }
      vars:
        pg_cluster: pg-minio
        pg_users: [ { name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] } ]
        pg_databases: [ { name: minio } ]

    #==========================================================#
    # ferret: reuse pg-src as mongo (ferretdb)
    #==========================================================#
    # ./mongo.yml -l ferret
    ferret:
      hosts:
        10.10.10.31: { mongo_seq: 1 }
        10.10.10.32: { mongo_seq: 2 }
        10.10.10.33: { mongo_seq: 3 }
      vars:
        mongo_cluster: ferret
        mongo_pgurl: 'postgres://test:[email protected]:5432/src'


  #============================================================#
  # Global Variables
  #============================================================#
  vars:

    #==========================================================#
    # INFRA
    #==========================================================#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: china                     # upstream mirror region: default|china|europe
    infra_portal:                     # infra services exposed via portal
      home         : { domain: i.pigsty }     # default domain name
      minio        : { domain: m.pigsty    ,endpoint: "10.10.10.21:9001" ,scheme: https ,websocket: true }
      postgrest    : { domain: api.pigsty  ,endpoint: "127.0.0.1:8884" }
      pgadmin      : { domain: adm.pigsty  ,endpoint: "127.0.0.1:8885" }
      pgweb        : { domain: cli.pigsty  ,endpoint: "127.0.0.1:8886" }
      bytebase     : { domain: ddl.pigsty  ,endpoint: "127.0.0.1:8887" }
      jupyter      : { domain: lab.pigsty  ,endpoint: "127.0.0.1:8888"  , websocket: true }
      supa         : { domain: supa.pigsty ,endpoint: "10.10.10.10:8000", websocket: true }

    #==========================================================#
    # NODE
    #==========================================================#
    node_id_from_pg: false            # use nodename rather than pg identity as hostname
    node_conf: tiny                   # use small node template
    node_timezone: Asia/Hong_Kong     # use Asia/Hong_Kong Timezone
    node_dns_servers:                 # DNS servers in /etc/resolv.conf
      - 10.10.10.10
      - 10.10.10.11
    node_etc_hosts:
      - 10.10.10.10 i.pigsty
      - 10.10.10.20 sss.pigsty        # point minio service domain to the L2 VIP of proxy cluster
    node_ntp_servers:                 # NTP servers in /etc/chrony.conf
      - pool cn.pool.ntp.org iburst
      - pool 10.10.10.10 iburst
    node_admin_ssh_exchange: false    # exchange admin ssh key among node cluster

    #==========================================================#
    # PGSQL
    #==========================================================#
    pg_conf: tiny.yml
    pgbackrest_method: minio          # USE THE HA MINIO THROUGH A LOAD BALANCER
    pg_dbsu_ssh_exchange: false       # do not exchange dbsu ssh key among pgsql cluster
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for last 14 days


    #==========================================================#
    # Repo
    #==========================================================#
    repo_packages: [
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules,
      pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl
    ]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Notes

The ha/simu template is a large-scale production simulation used to test and validate complex scenarios.

Architecture

  • 3-node INFRA cluster (monitoring/alerting/Nginx/DNS), whose nodes also host the 3-node pg-meta CMDB cluster
  • 5-node dedicated ETCD cluster, reused as a 5-node pg-etcd cluster and five redis-meta sentinels
  • 4-node dedicated MinIO cluster (4 disks per node), reused as a 4-node pg-minio cluster
  • 2-node proxy cluster (HAProxy + Keepalived L2 VIP) load-balancing MinIO, reused as a 2-node pg-tmp cluster (see the health-check sketch below)
  • pg-src / pg-dst: two dedicated 3-node PostgreSQL clusters for replication testing, reused for redis-src / redis-dst and a 3-node FerretDB (mongo) cluster
  • pg_exporters on the infra nodes, monitoring external clusters (pg-meta2 / pg-src2 / pg-dst2)
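
For instance, the MinIO service behind the proxy VIP can be probed through its health endpoint (a sketch; sss.pigsty resolves to the 10.10.10.20 VIP via node_etc_hosts):

curl --cacert /etc/pki/ca.crt -i https://sss.pigsty:9000/minio/health/live    # expect HTTP 200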

Use Cases

  • Large-scale deployment testing and validation
  • High-availability failure drills
  • Performance benchmarking
  • Previewing and evaluating new features

Caveats

  • Requires a powerful host machine (64GB+ RAM recommended)
  • Simulated with Vagrant virtual machines

19 - ha/full

4-node full-featured demo environment with two PostgreSQL clusters, MinIO, Redis, and more

The ha/full template is Pigsty's recommended sandbox demo environment: it deploys two PostgreSQL clusters across four nodes to test and demonstrate Pigsty's capabilities.

Most Pigsty tutorials and examples are based on this template's sandbox environment.


Configuration Overview

  • Conf Name: ha/full
  • Node Count: 4 nodes
  • Description: 4-node full-featured demo environment with two PostgreSQL clusters, MinIO, Redis, and more
  • OS Distros: el8, el9, el10, d12, d13, u22, u24
  • OS Archs: x86_64, aarch64
  • Related: ha/trio, ha/safe, demo/demo

Usage:

./configure -c ha/full [-i <primary_ip>]

After the config is generated, adjust the IP addresses of the other three nodes.


Configuration Content

Source: pigsty/conf/ha/full.yml

---
#==============================================================#
# File      :   full.yml
# Desc      :   Pigsty Local Sandbox 4-node Demo Config
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    # infra: monitor, alert, repo, etc..
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        docker_enabled: true      # enabled docker with ./docker.yml
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    # etcd cluster for HA postgres DCS
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd

    # minio (single node, used as backup repo)
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 }
      vars:
        minio_cluster: minio
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    # postgres cluster: pg-meta
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta     ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] }
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1
        node_crontab:  # make a full backup 1 am everyday
          - '00 01 * * * postgres /pg/bin/pg-backup full'

    # pgsql 3 node ha cluster: pg-test
    pg-test:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
        10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
        10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
      vars:
        pg_cluster: pg-test           # define pgsql cluster name
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: test }]
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1
        node_crontab:  # make a full backup on monday 1am, and incremental backups on the other days
          - '00 01 * * 1 postgres /pg/bin/pg-backup full'
          - '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup'

    #----------------------------------#
    # redis ms, sentinel, native cluster
    #----------------------------------#
    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    redis-meta: # redis sentinel x 3
      hosts: { 10.10.10.11: { redis_node: 1 , redis_instances: { 26379: { } ,26380: { } ,26381: { } } } }
      vars:
        redis_cluster: redis-meta
        redis_password: 'redis.meta'
        redis_mode: sentinel
        redis_max_memory: 16MB
        redis_sentinel_monitor: # primary list for redis sentinel, use cls as name, primary ip:port
          - { name: redis-ms, host: 10.10.10.10, port: 6379 ,password: redis.ms, quorum: 2 }

    redis-test: # redis native cluster: 3m x 3s
      hosts:
        10.10.10.12: { redis_node: 1 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
        10.10.10.13: { redis_node: 2 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
      vars: { redis_cluster: redis-test ,redis_password: 'redis.test' ,redis_mode: cluster, redis_max_memory: 32MB }


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name
      #minio : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------#
    # MinIO Related Options
    #----------------------------------#
    node_etc_hosts: [ '${admin_ip} i.pigsty sss.pigsty' ]
    pgbackrest_method: minio          # if you want to use minio as backup repo instead of 'local' fs, uncomment this
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for last 14 days

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_extra_packages: [ pg18-main ] #,pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_version: 18                    # default postgres version
    #pg_extensions: [pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl ,pg18-olap]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Notes

The ha/full template is Pigsty's full-featured demo configuration, showing how its components work together.

Component Overview

  Component    Nodes       Description
  INFRA        node 1      monitoring / alerting / Nginx / DNS
  ETCD         node 1      DCS service
  MinIO        node 1      S3-compatible object storage
  pg-meta      node 1      single-node PostgreSQL
  pg-test      nodes 2-4   3-node HA PostgreSQL
  redis-ms     node 1      Redis master-replica
  redis-meta   node 2      Redis sentinel
  redis-test   nodes 3-4   Redis native cluster
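
Once deployed, the pg-test cluster can be reached through its L2 VIP, for example (a sketch; assumes the default service ports and the sample test user defined in this template):

psql postgres://test:[email protected]:5432/test -c 'SELECT 1'    # primary, direct via VIP
psql postgres://test:[email protected]:5433/test -c 'SELECT 1'    # primary service via haproxy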

Use Cases

  • Pigsty feature demos and learning
  • Development and testing environments
  • Evaluating high-availability architectures
  • Comparing the different Redis modes

Differences from ha/trio

  • Adds a second PostgreSQL cluster (pg-test)
  • Adds Redis cluster examples in three modes
  • Uses single-node infrastructure (instead of three nodes)

Caveats

  • This template is mainly for demos and testing; for production use, see ha/trio or ha/safe
  • MinIO backup is enabled by default; comment out the relevant config if not needed

20 - ha/safe

Security-hardened high-availability template following strict security best practices

The ha/safe template is derived from ha/trio: a dedicated security-hardened configuration that follows strict security best practices.


Configuration Overview

  • Conf Name: ha/safe
  • Node Count: 3 nodes (optional delayed replica)
  • Description: security-hardened high-availability template following strict security best practices
  • OS Distros: el8, el9, el10, d12, d13, u22, u24
  • OS Archs: x86_64 (some security extensions are unavailable on ARM64)
  • Related: ha/trio, ha/full

Usage:

./configure -c ha/safe [-i <primary_ip>]

Security Hardening

The ha/safe template implements the following hardening measures:

  • Enforced SSL: SSL enabled on both PostgreSQL and PgBouncer (see the connection sketch after this list)
  • Strong password policy: password complexity enforced via the passwordcheck extension
  • User expiration: all users are given a 20-year expiration date
  • Minimized listen scope: restricted listen addresses for PostgreSQL / Patroni / PgBouncer
  • Strict HBA rules: SSL enforced for authentication; certificate authentication for admins
  • Audit logging: connection and disconnection events are logged
  • Delayed replica: optional 1-hour delayed replica for recovering from accidental operations
  • Crit template: tuned with crit.yml for zero data loss
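
To confirm that a client session is actually encrypted, a quick sketch (assumes the sample dbuser_meta password from this template and the pg-meta VIP):

psql "postgresql://dbuser_meta:[email protected]:5432/meta?sslmode=require" \
  -c 'SELECT ssl, version FROM pg_stat_ssl WHERE pid = pg_backend_pid();'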

Configuration Content

Source: pigsty/conf/ha/safe.yml

---
#==============================================================#
# File      :   safe.yml
# Desc      :   Pigsty 3-node security enhance template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


#===== SECURITY ENHANCEMENT CONFIG TEMPLATE WITH 3 NODES ======#
#   * 3 infra nodes, 3 etcd nodes, single minio node
#   * 3-instance pgsql cluster with an extra delayed instance
#   * crit.yml templates, no data loss, checksum enforced
#   * enforce ssl on postgres & pgbouncer, use postgres by default
#   * enforce an expiration date for all users (20 years by default)
#   * enforce strong password policy with passwordcheck extension
#   * enforce changing default password for all users
#   * log connections and disconnections
#   * restrict listen ip address for postgres/patroni/pgbouncer


all:
  children:

    infra: # infra cluster for proxy, monitor, alert, etc
      hosts: # 1 for common usage, 3 nodes for production
        10.10.10.10: { infra_seq: 1 } # identity required
        10.10.10.11: { infra_seq: 2, repo_enabled: false }
        10.10.10.12: { infra_seq: 3, repo_enabled: false }
      vars: { patroni_watchdog_mode: off }

    minio: # minio cluster, s3 compatible object storage
      hosts: { 10.10.10.10: { minio_seq: 1 } }
      vars: { minio_cluster: minio }

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
        10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
        10.10.10.12: { etcd_seq: 3 }  # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd  # mark etcd cluster name etcd
        etcd_safeguard: false # safeguard against purging
        etcd_clean: true # purge etcd during init process

    pg-meta: # 3 instance postgres cluster `pg-meta`
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica }
        10.10.10.12: { pg_seq: 3, pg_role: replica , pg_offline_query: true }
      vars:
        pg_cluster: pg-meta
        pg_conf: crit.yml
        pg_users:
          - { name: dbuser_meta , password: Pleas3-ChangeThisPwd ,expire_in: 7300 ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view , password: Make.3ure-Compl1ance  ,expire_in: 7300 ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector } ] }
        pg_services:
          - { name: standby , ip: "*" ,port: 5435 , dest: default ,selector: "[]" , backup: "[? pg_role == `primary`]" }
        pg_listen: '${ip},${vip},${lo}'
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

    # OPTIONAL delayed cluster for pg-meta
    #pg-meta-delay: # delayed instance for pg-meta (1 hour ago)
    #  hosts: { 10.10.10.13: { pg_seq: 1, pg_role: primary, pg_upstream: 10.10.10.10, pg_delay: 1h } }
    #  vars: { pg_cluster: pg-meta-delay }


  ####################################################################
  #                          Parameters                              #
  ####################################################################
  vars: # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
    patroni_ssl_enabled: true         # secure patroni RestAPI communications with SSL?
    pgbouncer_sslmode: require        # pgbouncer client ssl mode: disable|allow|prefer|require|verify-ca|verify-full, disable by default
    pg_default_service_dest: postgres # default service destination to postgres instead of pgbouncer
    pgbackrest_method: minio          # pgbackrest repo method: local,minio,[user-defined...]

    #----------------------------------#
    # MinIO Related Options
    #----------------------------------#
    minio_users: # and configure `pgbackrest_repo` & `minio_users` accordingly
      - { access_key: dba , secret_key: S3User.DBA.Strong.Password, policy: consoleAdmin }
      - { access_key: pgbackrest , secret_key: Min10.bAckup ,policy: readwrite }
    pgbackrest_repo: # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local: # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
      minio: # optional minio repo for pgbackrest
        s3_key: pgbackrest            # <-------- CHANGE THIS, SAME AS `minio_users` access_key
        s3_key_secret: Min10.bAckup   # <-------- CHANGE THIS, SAME AS `minio_users` secret_key
        cipher_pass: 'pgBR.${pg_cluster}'  # <-------- CHANGE THIS, you can use cluster name as part of password
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for last 14 days


    #----------------------------------#
    # Access Control
    #----------------------------------#
    # add passwordcheck extension to enforce strong password policy
    pg_libs: '$libdir/passwordcheck, pg_stat_statements, auto_explain'
    pg_extensions:
      - passwordcheck, supautils, pgsodium, pg_vault, pg_session_jwt, anonymizer, pgsmcrypto, pgauditlogtofile, pgaudit #, pgaudit17, pgaudit16, pgaudit15, pgaudit14
      - pg_auth_mon, credcheck, pgcryptokey, pg_jobmon, logerrors, login_hook, set_user, pgextwlist, pg_auditor, sslutils, noset #pg_tde #pg_snakeoil
    pg_default_roles: # default roles and users in postgres cluster
      - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access }
      - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
      - { name: dbrole_readwrite ,login: false ,roles: [ dbrole_readonly ]               ,comment: role for global read-write access }
      - { name: dbrole_admin     ,login: false ,roles: [ pg_monitor, dbrole_readwrite ]  ,comment: role for object creation }
      - { name: postgres     ,superuser: true  ,expire_in: 7300                        ,comment: system superuser }
      - { name: replicator ,replication: true  ,expire_in: 7300 ,roles: [ pg_monitor, dbrole_readonly ]   ,comment: system replicator }
      - { name: dbuser_dba   ,superuser: true  ,expire_in: 7300 ,roles: [ dbrole_admin ]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 , comment: pgsql admin user }
      - { name: dbuser_monitor ,roles: [ pg_monitor ] ,expire_in: 7300 ,pgbouncer: true ,parameters: { log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
    pg_default_hba_rules: # postgres host-based auth rules by default, order by `order`
      - { user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'   ,order: 100}
      - { user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident'  ,order: 150}
      - { user: '${repl}'    ,db: replication ,addr: localhost ,auth: ssl   ,title: 'replicator replication from localhost' ,order: 200}
      - { user: '${repl}'    ,db: replication ,addr: intra     ,auth: ssl   ,title: 'replicator replication from intranet'  ,order: 250}
      - { user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: ssl   ,title: 'replicator postgres db from intranet'  ,order: 300}
      - { user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password'  ,order: 350}
      - { user: '${monitor}' ,db: all         ,addr: infra     ,auth: ssl   ,title: 'monitor from infra host with password' ,order: 400}
      - { user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'    ,order: 450}
      - { user: '${admin}'   ,db: all         ,addr: world     ,auth: cert  ,title: 'admin @ everywhere with ssl & cert'    ,order: 500}
      - { user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: ssl   ,title: 'pgbouncer read/write via local socket' ,order: 550}
      - { user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: ssl   ,title: 'read/write biz user via password'      ,order: 600}
      - { user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: ssl   ,title: 'allow etl offline tasks from intranet' ,order: 650}
    pgb_default_hba_rules: # pgbouncer host-based authentication rules, order by `order`
      - { user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident' ,order: 100}
      - { user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd'  ,order: 150}
      - { user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: ssl   ,title: 'monitor access via intranet with pwd'  ,order: 200}
      - { user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr'  ,order: 250}
      - { user: '${admin}'   ,db: all         ,addr: intra     ,auth: ssl   ,title: 'admin access via intranet with pwd'    ,order: 300}
      - { user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'    ,order: 350}
      - { user: 'all'        ,db: all         ,addr: intra     ,auth: ssl   ,title: 'allow all user intra access with pwd'  ,order: 400}

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    #node_selinux_mode: enforcing     # set selinux mode: enforcing,permissive,disabled
    node_firewall_mode: zone          # firewall mode: off, none, zone, zone by default

    repo_extra_packages: [ pg18-main ] #,pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_version: 18                    # default postgres version
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    #grafana_admin_username: admin
    grafana_admin_password: You.Have2Use-A_VeryStrongPassword
    grafana_view_password: DBUser.Viewer
    #pg_admin_username: dbuser_dba
    pg_admin_password: PessWorb.Should8eStrong-eNough
    #pg_monitor_username: dbuser_monitor
    pg_monitor_password: MekeSuerYour.PassWordI5secured
    #pg_replication_username: replicator
    pg_replication_password: doNotUseThis-PasswordFor.AnythingElse
    #patroni_username: postgres
    patroni_password: don.t-forget-to-change-thEs3-password
    #haproxy_admin_username: admin
    haproxy_admin_password: GneratePasswordWith-pwgen-s-16-1
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Explained

The ha/safe template is Pigsty's security-hardened configuration, designed for production environments with elevated security requirements.

Security Feature Summary

Security Measure       | Description
SSL encryption         | Full-link SSL across PostgreSQL / PgBouncer / Patroni
Strong password policy | Password complexity enforced by the passwordcheck extension
User expiration        | All users expire after 20 years (expire_in: 7300)
Strict HBA             | Remote admin access requires certificate authentication
Backup encryption      | AES-256-CBC encryption for the MinIO backup repo
Audit logging          | SQL audit logs via the pgaudit extension
Delayed replica        | 1-hour delayed replica for recovering from accidental operations

Use Cases

  • High-security industries such as finance, healthcare, and government
  • Environments subject to compliance and audit requirements
  • Critical workloads with stringent data security requirements

Caveats

  • Some security extensions are unavailable on ARM64; enable them as appropriate
  • All default passwords must be changed to strong ones (see the sketch below)
  • Combine with periodic security audits
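
To replace the defaults in bulk, generate strong random strings first; a minimal sketch (assumes pwgen or openssl is available on the admin node):

pwgen -s 16 1              # one 16-character secure random password
openssl rand -base64 18    # alternative when pwgen is not installed
vi pigsty.yml              # paste the generated values over the default passwords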

21 - ha/trio

Standard 3-node high-availability configuration template; any single server can fail.

Three nodes are the minimum spec for true high availability. The ha/trio template uses the standard 3-node HA architecture: the three core modules, INFRA, ETCD, and PGSQL, are each deployed on three nodes, so any single server can go down without losing the service.


Configuration Overview

  • Config Name: ha/trio
  • Node Count: 3 nodes
  • Description: standard 3-node HA architecture tolerating the failure of any single server
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported Arch: x86_64, aarch64
  • Related Configs: ha/dual, ha/full, ha/safe

Usage:

./configure -c ha/trio [-i <primary_ip>]

After the configuration is generated, replace the placeholder IPs 10.10.10.11 and 10.10.10.12 with your actual node IP addresses.
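
For example, assuming your two additional nodes are 10.2.3.11 and 10.2.3.12 (hypothetical addresses):

sed -ie 's/10.10.10.11/10.2.3.11/g' pigsty.yml
sed -ie 's/10.10.10.12/10.2.3.12/g' pigsty.yml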


Configuration Content

Source file: pigsty/conf/ha/trio.yml

---
#==============================================================#
# File      :   trio.yml
# Desc      :   Pigsty 3-node standard HA config template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# 3 infra nodes, 3 etcd nodes, 3 pgsql nodes, and 1 minio node

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------#
    # infra: monitor, alert, repo, etc..
    #----------------------------------#
    infra: # infra cluster for proxy, monitor, alert, etc
      hosts: # 1 for common usage, 3 nodes for production
        10.10.10.10: { infra_seq: 1 } # identity required
        10.10.10.11: { infra_seq: 2, repo_enabled: false }
        10.10.10.12: { infra_seq: 3, repo_enabled: false }
      vars:
        patroni_watchdog_mode: off # do not fencing infra

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
        10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
        10.10.10.12: { etcd_seq: 3 }  # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd  # mark etcd cluster name etcd
        etcd_safeguard: false # safeguard against purging
        etcd_clean: true # purge etcd during init process

    minio: # minio cluster, s3 compatible object storage
      hosts: { 10.10.10.10: { minio_seq: 1 } }
      vars: { minio_cluster: minio }

    pg-meta: # 3 instance postgres cluster `pg-meta`
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica }
        10.10.10.12: { pg_seq: 3, pg_role: replica , pg_offline_query: true }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta , password: DBUser.Meta ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view , password: DBUser.View ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector } ] }
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------#
    # Meta Data
    #----------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # infra services exposed via portal
      home         : { domain: i.pigsty }     # default domain name
      #minio        : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_extra_packages: [ pg18-main ] #,pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_version: 18                    # default postgres version
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Explained

The ha/trio template is Pigsty's standard high-availability configuration, providing true automatic failover.

Architecture

  • 3-node INFRA: Prometheus / Grafana / Nginx deployed across all three nodes
  • 3-node ETCD: DCS majority quorum, tolerating a single node failure
  • 3-node PostgreSQL: one primary, two replicas, automatic failover
  • Single-node MinIO: can be scaled out to multiple nodes as needed

HA Guarantees

  • The 3-node ETCD cluster keeps its majority with one node down
  • When the PostgreSQL primary fails, Patroni automatically elects a new primary
  • The L2 VIP follows the primary, so applications keep their connection settings unchanged (see the check below)
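
A quick connectivity check through the VIP defined in this template (10.10.10.2), using the default business user (template default password DBUser.Meta, which you should change):

psql -h 10.10.10.2 -U dbuser_meta -d meta -c 'SELECT 1'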

Use Cases

  • Minimal HA deployment for production
  • Critical workloads that require automatic failover
  • A foundation for larger-scale deployments

Scaling Suggestions

  • For stronger data safety, see the ha/safe template
  • For more demo features, see the ha/full template
  • For production, enabling pgbackrest_method: minio for remote backups is recommended

22 - ha/dual

Two-node configuration template with limited high availability, tolerating the failure of one specific server.

The ha/dual template deploys on two nodes, forming a primary-standby "semi-HA" architecture. If you only have two servers, this is a pragmatic choice.


Configuration Overview

  • Config Name: ha/dual
  • Node Count: 2 nodes
  • Description: two-node limited-HA deployment, tolerating the failure of one specific server
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported Arch: x86_64, aarch64
  • Related Configs: ha/trio, slim

Usage:

./configure -c ha/dual [-i <primary_ip>]

After the configuration is generated, replace the placeholder IP 10.10.10.11 with the actual IP address of the second database node.


Configuration Content

Source file: pigsty/conf/ha/dual.yml

---
#==============================================================#
# File      :   dual.yml
# Desc      :   Pigsty deployment example for two nodes
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


# It is recommended to use at least three nodes in production deployments,
# but sometimes only two nodes are available: that's what dual.yml is for.
#
# In this setup, we have two nodes: .10 (admin_node) and .11 (pgsql_primary):
#
# If .11 is down, .10 will take over, since the dcs:etcd is still alive.
# If .10 is down, .11 (pgsql primary) will keep functioning as primary if:
#   - only dcs:etcd is down, or
#   - only pgsql (the replica) is down
# If both etcd & pgsql are down (e.g. the whole node), the primary will demote itself.


all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, optional backup repo for pgbackrest
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # postgres cluster 'pg-meta' with single primary instance
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: replica }
        10.10.10.11: { pg_seq: 2, pg_role: primary }  # <----- use this as primary by default
      vars:
        pg_cluster: pg-meta
        pg_databases: [ { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector }] } ]
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

  vars:                               # global parameters
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
    infra_portal:                     # domain names and upstream servers
      home   : { domain: i.pigsty }
      #minio : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_extra_packages: [ pg18-main ] #,pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_version: 18                    # default postgres version
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Explained

The ha/dual template is Pigsty's two-node limited-HA configuration, designed for scenarios where only two servers are available.

Architecture

  • Node A (10.10.10.10): admin node, running Infra + etcd + the PostgreSQL standby
  • Node B (10.10.10.11): data node, running only the PostgreSQL primary

Failure Scenarios

Failure            | Impact                         | Auto Recovery
Node B down        | Primary fails over to Node A   | Automatic
Node A etcd down   | Primary keeps running (no DCS) | Manual
Node A pgsql down  | Primary keeps running          | Manual
Node A fully down  | Primary demotes to standalone  | Manual

Use Cases

  • Budget-constrained environments with only two servers
  • Scenarios where manual intervention is acceptable for some failures
  • A transitional setup on the way to 3-node HA

Caveats

  • True HA requires at least three nodes (the DCS needs a majority)
  • Upgrade to a 3-node architecture as soon as feasible
  • The L2 VIP requires network support (same broadcast domain)

23 - Application Templates

24 - app/odoo

Deploy the Odoo open-source ERP system on Pigsty-managed PostgreSQL

The app/odoo configuration template provides a reference configuration for self-hosting the Odoo open-source ERP system, with Pigsty-managed PostgreSQL as the database.

For more details, see the Odoo deployment tutorial.


Configuration Overview

  • Config Name: app/odoo
  • Node Count: single node
  • Description: deploy Odoo ERP on Pigsty-managed PostgreSQL
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported Arch: x86_64, aarch64
  • Related Configs: meta

Usage:

./configure -c app/odoo [-i <primary_ip>]

Configuration Content

Source file: pigsty/conf/app/odoo.yml

---
#==============================================================#
# File      :   odoo.yml
# Desc      :   pigsty config for running 1-node odoo app
# Ctime     :   2025-01-11
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/app/odoo
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://doc.pgsty.com/app/odoo
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap               # prepare local repo & ansible
# ./configure -c app/odoo   # Use this odoo config template
# vi pigsty.yml             # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml              # install pigsty & pgsql & minio
# ./docker.yml              # install docker & docker-compose
# ./app.yml                 # install odoo

all:
  children:

    # the odoo application (default username & password: admin/admin)
    odoo:
      hosts: { 10.10.10.10: {} }
      vars:
        app: odoo   # specify app name to be installed (in the apps)
        apps:       # define all applications
          odoo:     # app name, should have corresponding ~/pigsty/app/odoo folder
            file:   # optional directory to be created
              - { path: /data/odoo         ,state: directory, owner: 100, group: 101 }
              - { path: /data/odoo/webdata ,state: directory, owner: 100, group: 101 }
              - { path: /data/odoo/addons  ,state: directory, owner: 100, group: 101 }
            conf:   # override /opt/<app>/.env config file
              PG_HOST: 10.10.10.10            # postgres host
              PG_PORT: 5432                   # postgres port
              PG_USERNAME: odoo               # postgres user
              PG_PASSWORD: DBUser.Odoo        # postgres password
              ODOO_PORT: 8069                 # odoo app port
              ODOO_DATA: /data/odoo/webdata   # odoo webdata
              ODOO_ADDONS: /data/odoo/addons  # odoo plugins
              ODOO_DBNAME: odoo               # odoo database name
              ODOO_VERSION: 19.0              # odoo image version

    # the odoo database
    pg-odoo:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-odoo
        pg_users:
          - { name: odoo    ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_admin ] ,createdb: true ,comment: admin user for odoo service }
          - { name: odoo_ro ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_readonly ]  ,comment: read only user for odoo service  }
          - { name: odoo_rw ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_readwrite ] ,comment: read write user for odoo service }
        pg_databases:
          - { name: odoo ,owner: odoo ,revokeconn: true ,comment: odoo main database  }
        pg_hba_rules:
          - { user: all ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow access from local docker network' }
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages & pull docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pull images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is format as http://user:[email protected]
      #all_proxy:   127.0.0.1:12345

    infra_portal:                     # domain names and upstream servers
      home  : { domain: i.pigsty }
      minio : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      odoo:                           # nginx server config for odoo
        domain: odoo.pigsty           # REPLACE WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:8069"  # odoo service endpoint: IP:PORT
        websocket: true               # add websocket support
        certbot: odoo.pigsty          # certbot cert name, apply with `make cert`

    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    pg_version: 18

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Explained

The app/odoo template provides a one-command deployment of the Odoo open-source ERP system.

What is Odoo

  • The world's most popular open-source ERP system
  • Covers CRM, sales, procurement, inventory, accounting, HR, and more
  • Thousands of community and official app extensions
  • Web UI with mobile support

Key Features

  • Replaces Odoo's bundled database with Pigsty-managed PostgreSQL
  • Runs the latest Odoo 19.0 release
  • Persists data under a dedicated directory /data/odoo
  • Supports a custom addons directory /data/odoo/addons

Access

# Odoo web UI
http://odoo.pigsty:8069

# default admin account
username: admin
password: admin (set on first login)

Use Cases

  • ERP for small and medium-sized businesses
  • Replacing commercial solutions such as SAP or Oracle ERP
  • Business applications that need customized workflows

Caveats

  • The Odoo container runs as uid=100, gid=101; data directories need matching ownership
  • On first access, you create the database and set the admin password
  • Enable HTTPS in production
  • Install custom modules via /data/odoo/addons
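
To verify the deployment, inspect the app containers; this sketch assumes app.yml installed the compose project under /opt/odoo (matching the .env override path above) and that the compose service is named odoo:

cd /opt/odoo
docker compose ps             # the odoo container should be up
docker compose logs -f odoo   # watch startup logs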

25 - app/dify

Deploy the Dify AI application development platform on Pigsty-managed PostgreSQL

The app/dify configuration template provides a reference configuration for self-hosting the Dify AI application development platform, using Pigsty-managed PostgreSQL with pgvector as the vector store.

For more details, see the Dify deployment tutorial.


Configuration Overview

  • Config Name: app/dify
  • Node Count: single node
  • Description: deploy Dify on Pigsty-managed PostgreSQL
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported Arch: x86_64, aarch64
  • Related Configs: meta

Usage:

./configure -c app/dify [-i <primary_ip>]

Configuration Content

Source file: pigsty/conf/app/dify.yml

---
#==============================================================#
# File      :   dify.yml
# Desc      :   pigsty config for running 1-node dify app
# Ctime     :   2025-02-24
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/app/dify
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#
# Last Verified Dify Version: v1.8.1 on 2025-09-08
# tutorial: https://doc.pgsty.com/app/dify
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap               # prepare local repo & ansible
# ./configure -c app/dify   # use this dify config template
# vi pigsty.yml             # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml              # install pigsty & pgsql & minio
# ./docker.yml              # install docker & docker-compose
# ./app.yml                 # install dify with docker-compose
#
# To replace domain name:
#   sed -ie 's/dify.pigsty/dify.pigsty.cc/g' pigsty.yml


all:
  children:

    # the dify application
    dify:
      hosts: { 10.10.10.10: {} }
      vars:
        app: dify   # specify app name to be installed (in the apps)
        apps:       # define all applications
          dify:     # app name, should have corresponding ~/pigsty/app/dify folder
            file:   # data directory to be created
              - { path: /data/dify ,state: directory ,mode: 0755 }
            conf:   # override /opt/dify/.env config file

              # change domain, mirror, proxy, secret key
              NGINX_SERVER_NAME: dify.pigsty
              # A secret key for signing and encryption, gen with `openssl rand -base64 42` (CHANGE PASSWORD!)
              SECRET_KEY: sk-somerandomkey
              # expose DIFY nginx service with port 5001 by default
              DIFY_PORT: 5001
              # where to store dify files? the default is ./volume, we'll use another volume created above
              DIFY_DATA: /data/dify

              # proxy and mirror settings
              #PIP_MIRROR_URL: https://pypi.tuna.tsinghua.edu.cn/simple
              #SANDBOX_HTTP_PROXY: http://10.10.10.10:12345
              #SANDBOX_HTTPS_PROXY: http://10.10.10.10:12345

              # database credentials
              DB_USERNAME: dify
              DB_PASSWORD: difyai123456
              DB_HOST: 10.10.10.10
              DB_PORT: 5432
              DB_DATABASE: dify
              VECTOR_STORE: pgvector
              PGVECTOR_HOST: 10.10.10.10
              PGVECTOR_PORT: 5432
              PGVECTOR_USER: dify
              PGVECTOR_PASSWORD: difyai123456
              PGVECTOR_DATABASE: dify
              PGVECTOR_MIN_CONNECTION: 2
              PGVECTOR_MAX_CONNECTION: 10

    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dify ,password: difyai123456 ,pgbouncer: true ,roles: [ dbrole_admin ] ,superuser: true ,comment: dify superuser }
        pg_databases:
          - { name: dify        ,owner: dify ,revokeconn: true ,comment: dify main database  }
          - { name: dify_plugin ,owner: dify ,revokeconn: true ,comment: dify plugin_daemon database }
        pg_hba_rules:
          - { user: dify ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow dify access from local docker network' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages & pull docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pull images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is format as http://user:[email protected]
      #all_proxy:   127.0.0.1:12345

    infra_portal:                     # domain names and upstream servers
      home   :  { domain: i.pigsty }
      #minio :  { domain: m.pigsty    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      dify:                            # nginx server config for dify
        domain: dify.pigsty            # REPLACE WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:5001"   # dify service endpoint: IP:PORT
        websocket: true                # add websocket support
        certbot: dify.pigsty           # certbot cert name, apply with `make cert`

    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    pg_version: 18

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Explained

The app/dify template provides a one-command deployment of the Dify AI application development platform.

What is Dify

  • An open-source LLM application development platform
  • Supports RAG, Agent, and Workflow application patterns
  • Visual prompt orchestration and app-building UI
  • Works with many LLM backends (OpenAI, Claude, local models, etc.)

Key Features

  • Replaces Dify's bundled database with Pigsty-managed PostgreSQL
  • Uses pgvector as the vector store (instead of Weaviate/Qdrant)
  • Supports HTTPS and custom domain names
  • Persists data under a dedicated directory /data/dify

Access

# Dify web UI
http://dify.pigsty:5001

# or via the Nginx proxy
https://dify.pigsty

Use Cases

  • In-house AI application development platform
  • RAG knowledge-base Q&A systems
  • LLM-driven automation workflows
  • AI agent development and deployment

Caveats

  • SECRET_KEY must be changed; generate one with openssl rand -base64 42 (see the sketch below)
  • LLM API keys (e.g. an OpenAI API key) must be configured
  • The Docker network must be able to reach PostgreSQL (the 172.17.0.0/16 HBA rule is preconfigured)
  • Configure a proxy to speed up Python package downloads if needed
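
A minimal sketch for rotating the secret before deployment:

openssl rand -base64 42    # generate a fresh secret
vi pigsty.yml              # replace SECRET_KEY: sk-somerandomkey with the generated value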

26 - app/electric

Deploy the Electric real-time sync service on Pigsty-managed PostgreSQL

The app/electric configuration template provides a reference configuration for deploying the ElectricSQL real-time sync service, which streams data from PostgreSQL to clients in real time.


Configuration Overview

  • Config Name: app/electric
  • Node Count: single node
  • Description: deploy the Electric real-time sync service on Pigsty-managed PostgreSQL
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported Arch: x86_64, aarch64
  • Related Configs: meta

Usage:

./configure -c app/electric [-i <primary_ip>]

Configuration Content

Source file: pigsty/conf/app/electric.yml

---
#==============================================================#
# File      :   electric.yml
# Desc      :   pigsty config for running 1-node electric app
# Ctime     :   2025-03-29
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/app/electric
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://doc.pgsty.com/app/electric
# quick start: https://electric-sql.com/docs/quickstart
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap                 # prepare local repo & ansible
# ./configure -c app/electric # use this electric config template
# vi pigsty.yml               # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml                # install pigsty & pgsql & minio
# ./docker.yml                # install docker & docker-compose
# ./app.yml                   # install electric with docker-compose

all:
  children:
    # infra cluster for proxy, monitor, alert, etc..
    infra:
      hosts: { 10.10.10.10: { infra_seq: 1 } }
      vars:

        app: electric
        apps:       # define all applications
          electric: # app name, should have corresponding ~/pigsty/app/electric folder
            conf:   # override /opt/electric/.env config file : https://electric-sql.com/docs/api/config
              DATABASE_URL: 'postgresql://electric:[email protected]:5432/electric?sslmode=require'
              ELECTRIC_PORT: 8002
              ELECTRIC_PROMETHEUS_PORT: 8003
              ELECTRIC_INSECURE: true
              #ELECTRIC_SECRET: 1U6ItbhoQb4kGUU5wXBLbxvNf

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # postgres example cluster: pg-meta
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: electric ,password: DBUser.Electric ,pgbouncer: true , replication: true ,roles: [dbrole_admin] ,comment: electric main user }
        pg_databases: [{ name: electric , owner: electric }]
        pg_hba_rules:
          - { user: electric , db: replication ,addr: infra ,auth: ssl ,title: 'allow electric intranet/docker ssl access' }

  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------#
    # Meta Data
    #----------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # domain names and upstream servers
      home : { domain: i.pigsty }
      electric:
        domain: elec.pigsty
        endpoint: "${admin_ip}:8002"
        websocket: true               # apply free ssl cert with certbot: make cert
        certbot: elec.pigsty          # <----- replace with your own domain name!

    #----------------------------------#
    # Safe Guard
    #----------------------------------#
    # you can enable these flags after bootstrap, to prevent purging running etcd / pgsql instances
    etcd_safeguard: false             # prevent purging running etcd instance?
    pg_safeguard: false               # prevent purging running postgres instance? false by default

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    pg_version: 18                    # default postgres version
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Explained

The app/electric template provides a one-command deployment of the ElectricSQL real-time sync service.

What is Electric

  • A real-time data sync service from PostgreSQL to clients
  • Enables local-first application architectures
  • Streams data changes in real time via logical replication
  • Exposes an HTTP API for frontend applications to consume

Key Features

  • Uses Pigsty-managed PostgreSQL as the data source
  • Captures data changes via logical replication
  • Supports SSL-encrypted connections
  • Built-in Prometheus metrics endpoint

Access

# Electric API endpoint
http://elec.pigsty:8002

# Prometheus metrics
http://elec.pigsty:8003/metrics

Use Cases

  • Building local-first applications
  • Real-time data sync to clients
  • Data sync for mobile apps and PWAs
  • Live updates in collaborative applications

Caveats

  • The electric user needs the replication privilege
  • PostgreSQL logical replication must be enabled
  • Use SSL connections in production (sslmode=require is preconfigured)
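
A quick smoke test is to request a shape over the HTTP API, following the ElectricSQL quickstart (the items table here is a hypothetical example; ELECTRIC_INSECURE is enabled in this template, so no secret is required):

curl -i "http://elec.pigsty:8002/v1/shape?table=items&offset=-1"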

27 - app/maybe

Deploy the Maybe personal finance management system on Pigsty-managed PostgreSQL

The app/maybe configuration template provides a reference configuration for deploying Maybe, the open-source personal finance management system, with Pigsty-managed PostgreSQL as the database.


Configuration Overview

  • Config Name: app/maybe
  • Node Count: single node
  • Description: deploy the Maybe finance manager on Pigsty-managed PostgreSQL
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported Arch: x86_64, aarch64
  • Related Configs: meta

Usage:

./configure -c app/maybe [-i <primary_ip>]

Configuration Content

Source file: pigsty/conf/app/maybe.yml

---
#==============================================================#
# File      :   maybe.yml
# Desc      :   pigsty config for running 1-node maybe app
# Ctime     :   2025-09-08
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/app/maybe
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://doc.pgsty.com/app/maybe
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap               # prepare local repo & ansible
# ./configure -c app/maybe  # Use this maybe config template
# vi pigsty.yml             # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml              # install pigsty & pgsql
# ./docker.yml              # install docker & docker-compose
# ./app.yml                 # install maybe

all:
  children:

    # the maybe application (personal finance management)
    maybe:
      hosts: { 10.10.10.10: {} }
      vars:
        app: maybe   # specify app name to be installed (in the apps)
        apps:        # define all applications
          maybe:     # app name, should have corresponding ~/pigsty/app/maybe folder
            file:    # optional directory to be created
              - { path: /data/maybe             ,state: directory ,mode: 0755 }
              - { path: /data/maybe/storage     ,state: directory ,mode: 0755 }
            conf:    # override /opt/<app>/.env config file
              # Core Configuration
              MAYBE_VERSION: latest                    # Maybe image version
              MAYBE_PORT: 5002                         # Port to expose Maybe service
              MAYBE_DATA: /data/maybe                  # Data directory for Maybe
              APP_DOMAIN: maybe.pigsty                 # Domain name for Maybe
              
              # REQUIRED: Generate with: openssl rand -hex 64
              SECRET_KEY_BASE: sk-somerandomkey
              
              # Database Configuration
              DB_HOST: 10.10.10.10                    # PostgreSQL host
              DB_PORT: 5432                           # PostgreSQL port
              DB_USERNAME: maybe                      # PostgreSQL username
              DB_PASSWORD: MaybeFinance2025           # PostgreSQL password (CHANGE THIS!)
              DB_DATABASE: maybe_production           # PostgreSQL database name
              
              # Optional: API Integration
              #SYNTH_API_KEY:                         # Get from synthfinance.com

    # the maybe database
    pg-maybe:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-maybe
        pg_users:
          - { name: maybe    ,password: MaybeFinance2025 ,pgbouncer: true ,roles: [ dbrole_admin ] ,createdb: true ,comment: admin user for maybe service }
          - { name: maybe_ro ,password: MaybeFinance2025 ,pgbouncer: true ,roles: [ dbrole_readonly ]  ,comment: read only user for maybe service  }
          - { name: maybe_rw ,password: MaybeFinance2025 ,pgbouncer: true ,roles: [ dbrole_readwrite ] ,comment: read write user for maybe service }
        pg_databases:
          - { name: maybe_production ,owner: maybe ,revokeconn: true ,comment: maybe main database  }
        pg_hba_rules:
          - { user: maybe ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow maybe access from local docker network' }
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages & pull docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pull images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is format as http://user:[email protected]
      #all_proxy:   127.0.0.1:12345

    infra_portal:                     # infra services exposed via portal
      home  : { domain: i.pigsty }    # default domain name
      minio : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      maybe:                          # nginx server config for maybe
        domain: maybe.pigsty          # REPLACE WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:5002"  # maybe service endpoint: IP:PORT
        websocket: true               # add websocket support

    repo_enabled: false
    node_repo_modules: node,infra,pgsql

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root

...

Configuration Explained

The app/maybe template provides a one-command deployment of Maybe, an open-source personal finance management system.

What is Maybe

  • An open-source personal and household finance management system
  • Multi-account, multi-currency asset tracking
  • Portfolio analysis and net-worth calculation
  • A clean, modern web UI

Key Features

  • Replaces Maybe's bundled database with Pigsty-managed PostgreSQL
  • Persists data under a dedicated directory /data/maybe
  • Supports HTTPS and custom domain names
  • Multi-user permission management

Access

# Maybe web UI
http://maybe.pigsty:5002

# or via the Nginx proxy
https://maybe.pigsty

Use Cases

  • Personal or family finance management
  • Portfolio tracking and analysis
  • Aggregating assets across multiple accounts
  • Replacing commercial services such as Mint or YNAB

Caveats

  • SECRET_KEY_BASE must be changed; generate it with openssl rand -hex 64
  • Register the admin account on first access
  • Optionally configure a Synth API key to fetch stock price data
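
To confirm the database side is ready before first login, connect with this template's default credentials (change them in production):

psql postgresql://maybe:[email protected]:5432/maybe_production -c 'SELECT 1'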

28 - app/teable

Deploy Teable, an open-source Airtable alternative, on Pigsty-managed PostgreSQL

The app/teable configuration template provides a reference configuration for deploying Teable, an open-source no-code database, with Pigsty-managed PostgreSQL as the database.


Configuration Overview

  • Config Name: app/teable
  • Node Count: single node
  • Description: deploy Teable on Pigsty-managed PostgreSQL
  • Supported OS: el8, el9, el10, d12, d13, u22, u24
  • Supported Arch: x86_64, aarch64
  • Related Configs: meta

Usage:

./configure -c app/teable [-i <primary_ip>]

Configuration Content

Source file: pigsty/conf/app/teable.yml

---
#==============================================================#
# File      :   teable.yml
# Desc      :   pigsty config for running 1-node teable app
# Ctime     :   2025-02-24
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/app/teable
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://doc.pgsty.com/app/teable
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap               # prepare local repo & ansible
# ./configure -c app/teable # use this teable config template
# vi pigsty.yml             # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml              # install pigsty & pgsql & minio
# ./docker.yml              # install docker & docker-compose
# ./app.yml                 # install teable with docker-compose
#
# To replace domain name:
#   sed -ie 's/teable.pigsty/teable.pigsty.cc/g' pigsty.yml

all:
  children:

    # the teable application
    teable:
      hosts: { 10.10.10.10: {} }
      vars:
        app: teable   # specify app name to be installed (in the apps)
        apps:         # define all applications
          teable:     # app name, ~/pigsty/app/teable folder
            conf:     # override /opt/teable/.env config file
              # https://github.com/teableio/teable/blob/develop/dockers/examples/standalone/.env
              # https://help.teable.io/en/deploy/env
              POSTGRES_HOST: "10.10.10.10"
              POSTGRES_PORT: "5432"
              POSTGRES_DB: "teable"
              POSTGRES_USER: "dbuser_teable"
              POSTGRES_PASSWORD: "DBUser.Teable"
              PRISMA_DATABASE_URL: "postgresql://dbuser_teable:[email protected]:5432/teable"
              PUBLIC_ORIGIN: "http://tea.pigsty"
              PUBLIC_DATABASE_PROXY: "10.10.10.10:5432"
              TIMEZONE: "UTC"

              # Need to support sending emails to enable the following configurations
              #BACKEND_MAIL_HOST: smtp.teable.io
              #BACKEND_MAIL_PORT: 465
              #BACKEND_MAIL_SECURE: true
              #BACKEND_MAIL_SENDER: noreply.teable.io
              #BACKEND_MAIL_SENDER_NAME: Teable
              #BACKEND_MAIL_AUTH_USER: username
              #BACKEND_MAIL_AUTH_PASS: password


    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_teable ,password: DBUser.Teable ,pgbouncer: true ,roles: [ dbrole_admin ] ,superuser: true ,comment: teable superuser }
        pg_databases:
          - { name: teable ,owner: dbuser_teable ,comment: teable database }
        pg_hba_rules:
          - { user: dbuser_teable ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow teable access from local docker network' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages & pull docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pull images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is format as http://user:[email protected]
      #all_proxy:   127.0.0.1:12345
    infra_portal:                        # domain names and upstream servers
      home   : { domain: i.pigsty }
      #minio : { domain: m.pigsty    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

      teable:                            # nginx server config for teable
        domain: tea.pigsty               # REPLACE IT WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:8890"     # teable service endpoint: IP:PORT
        websocket: true                  # add websocket support
        certbot: tea.pigsty              # certbot cert name, apply with `make cert`

    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    pg_version: 18

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Explained

The app/teable template provides a one-command deployment of Teable, an open-source no-code database.

What is Teable

  • An open-source Airtable alternative
  • A no-code database built on PostgreSQL
  • Grid, kanban, calendar, form, and other views
  • APIs and automation workflows

Key Features

  • Uses Pigsty-managed PostgreSQL as the underlying storage
  • Data is stored in real PostgreSQL tables
  • Data can be queried directly with SQL
  • Integrates with other PostgreSQL tools and extensions

Access

# Teable web UI
http://tea.pigsty:8890

# or via the Nginx proxy
https://tea.pigsty

# the underlying data is also directly accessible via SQL
psql postgresql://dbuser_teable:[email protected]:5432/teable

Use Cases

  • Airtable-like functionality, self-hosted
  • Collaborative data management for teams
  • Workloads that need both API and SQL access
  • Keeping data in a real PostgreSQL database

Caveats

  • The Teable user requires superuser privileges
  • PUBLIC_ORIGIN must be set to the externally reachable address
  • Email notifications are supported (optional SMTP configuration)

29 - app/registry

Deploy a Docker Registry mirror and private registry with Pigsty

The app/registry configuration template provides a reference configuration for deploying a Docker Registry pull-through mirror, which can serve as a Docker Hub accelerator or a private image registry.


配置概览

  • 配置名称: app/registry
  • 节点数量: 单节点
  • 配置说明:部署 Docker Registry 镜像代理和私有仓库
  • 适用系统:el8, el9, el10, d12, d13, u22, u24
  • 适用架构:x86_64, aarch64
  • 相关配置:meta

启用方式:

./configure -c app/registry [-i <primary_ip>]

配置内容

源文件地址:pigsty/conf/app/registry.yml

---
#==============================================================#
# File      :   registry.yml
# Desc      :   pigsty config for running Docker Registry Mirror
# Ctime     :   2025-07-01
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/app/registry
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://doc.pgsty.com/app/registry
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./configure -c app/registry   # use this registry config template
# vi pigsty.yml                 # IMPORTANT: CHANGE DOMAIN & CREDENTIALS!
# ./deploy.yml                  # install pigsty
# ./docker.yml                  # install docker & docker-compose
# ./app.yml                     # install registry with docker-compose
#
# To replace domain name:
#   sed -ie 's/d.pigsty/registry.your-domain.com/g' pigsty.yml

#==============================================================#
# Usage Instructions:
#==============================================================#
#
# 1. Deploy the registry:
#    ./configure -c conf/app/registry.yml && ./deploy.yml && ./docker.yml && ./app.yml
#
# 2. Configure Docker clients to use the mirror:
#    Edit /etc/docker/daemon.json:
#    {
#      "registry-mirrors": ["https://registry.your-domain.com"],
#      "insecure-registries": ["registry.your-domain.com"]
#    }
#
# 3. Restart Docker daemon:
#    sudo systemctl restart docker
#
# 4. Test the registry:
#    docker pull nginx:latest  # This will now use your mirror
#
# 5. Access the web UI (optional):
#    https://registry-ui.your-domain.com
#
# 6. Monitor the registry:
#    curl https://registry.your-domain.com/v2/_catalog
#    curl https://registry.your-domain.com/v2/nginx/tags/list
#
#==============================================================#


all:
  children:

    # the docker registry mirror application
    registry:
      hosts: { 10.10.10.10: {} }
      vars:
        app: registry                    # specify app name to be installed
        apps:                            # define all applications
          registry:
            file:                        # create data directory for registry
              - { path: /data/registry ,state: directory ,mode: 0755 }
            conf:                        # environment variables for registry
              REGISTRY_DATA: /data/registry
              REGISTRY_PORT: 5000
              REGISTRY_UI_PORT: 5080
              REGISTRY_STORAGE_DELETE_ENABLED: true
              REGISTRY_LOG_LEVEL: info
              REGISTRY_PROXY_REMOTEURL: https://registry-1.docker.io
              REGISTRY_PROXY_TTL: 168h

    # basic infrastructure
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

  vars:
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                      # pigsty version string
    admin_ip: 10.10.10.10                # admin node ip address
    region: default                      # upstream mirror region: default,china,europe
    infra_portal:                        # infra services exposed via portal
      home : { domain: i.pigsty }        # default domain name

      # Docker Registry Mirror service configuration
      registry:                          # nginx server config for registry
        domain: d.pigsty                 # REPLACE IT WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:5000"     # registry service endpoint: IP:PORT
        websocket: false                 # registry doesn't need websocket
        certbot: d.pigsty                # certbot cert name, apply with `make cert`

      # Optional: Registry Web UI
      registry-ui:                       # nginx server config for registry UI
        domain: dui.pigsty               # REPLACE IT WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:5080"     # registry UI endpoint: IP:PORT
        websocket: false                 # UI doesn't need websocket
        certbot: d.pigsty                # certbot cert name for UI

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # Default PostgreSQL Major Version is 18
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_packages: [ pgsql-main, pgsql-common ]   # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Explained

The app/registry template provides a one-click deployment of a Docker Registry mirror.

What is Registry

  • Docker's official registry implementation
  • Can act as a pull-through cache for Docker Hub
  • Can also serve as a private image registry
  • Supports image caching with local storage

Key Features

  • Acts as a proxy cache for Docker Hub to speed up image pulls
  • Caches images on local storage under /data/registry
  • Ships a web UI for browsing cached images
  • The cache expiration time (TTL) is configurable

Configuring Docker Clients

# Edit /etc/docker/daemon.json
{
  "registry-mirrors": ["https://d.pigsty"],
  "insecure-registries": ["d.pigsty"]
}

# Restart Docker
sudo systemctl restart docker
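
After restarting, you can confirm that the daemon picked up the mirror; docker info prints a Registry Mirrors section when one is configured:

# confirm the mirror is registered with the docker daemon
docker info | grep -A1 'Registry Mirrors'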

Access

# Registry API
https://d.pigsty/v2/_catalog

# Web UI
http://dui.pigsty:5080

# Pull an image (served through the mirror automatically)
docker pull nginx:latest

Use Cases

  • Accelerating Docker image pulls (especially in mainland China)
  • Reducing dependency on external networks
  • In-house private image registry (see the note below)
  • Image distribution in offline environments
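
Note that a registry running in pull-through cache mode (with REGISTRY_PROXY_REMOTEURL set, as in this template) does not accept pushes. To use it as a writable private registry, remove that variable from the app conf and re-run ./app.yml; under that assumption, the standard tag-and-push flow applies:

# assumption: REGISTRY_PROXY_REMOTEURL removed from the registry conf, ./app.yml re-run
docker tag nginx:latest d.pigsty/nginx:latest
docker push d.pigsty/nginx:latest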

Caveats

  • Sufficient disk space is required for cached images
  • Images are cached for 7 days by default (REGISTRY_PROXY_TTL: 168h); see the sketch below for tuning
  • HTTPS certificates can be configured (via certbot)
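
To change the cache TTL, adjust the corresponding environment variable in the registry app definition above and re-run ./app.yml; for example, shortening the cache to three days:

# excerpt from the registry app conf in pigsty.yml
conf:
  REGISTRY_PROXY_TTL: 72h    # cache upstream images for 3 days instead of the default 168h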

30 - Other Templates

31 - demo/el

Dedicated configuration template for Enterprise Linux (RHEL/Rocky/Alma)

The demo/el template targets the Enterprise Linux family of distributions (RHEL, Rocky Linux, AlmaLinux, Oracle Linux), and defines the default four-node sandbox environment.


Configuration Overview

  • Conf Name: demo/el
  • Node Count: four-node sandbox
  • Description: dedicated configuration template for Enterprise Linux
  • OS Distro: el8, el9, el10
  • OS Arch: x86_64, aarch64
  • Related: meta, demo/debian

Usage:

./configure -c demo/el [-i <primary_ip>]
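
The config below also binds L2 VIPs for the sandbox clusters (10.10.10.2 for pg-meta, 10.10.10.3 for pg-test). A quick smoke test from any node on the same LAN, assuming the default dbuser_meta credentials and HBA rules of this template admit password access:

# the cluster VIP should answer from within the sandbox LAN
ping -c1 10.10.10.2

# connect to pg-meta's primary through its VIP (password: DBUser.Meta)
psql postgres://dbuser_meta:[email protected]:5432/meta -c 'SELECT 1'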

Configuration Content

Source file: pigsty/conf/demo/el.yml

---
#==============================================================#
# File      :   el.yml
# Desc      :   Default parameters for EL System in Pigsty
# Ctime     :   2020-05-22
# Mtime     :   2025-12-27
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


#==============================================================#
#                        Sandbox (4-node)                      #
#==============================================================#
# admin user : vagrant  (nopass ssh & sudo already set)        #
# 1.  meta    :    10.10.10.10     (2 Core | 4GB)    pg-meta   #
# 2.  node-1  :    10.10.10.11     (1 Core | 1GB)    pg-test-1 #
# 3.  node-2  :    10.10.10.12     (1 Core | 1GB)    pg-test-2 #
# 4.  node-3  :    10.10.10.13     (1 Core | 1GB)    pg-test-3 #
# (replace these IPs if your 4-node env uses different IPs)    #
# VIP 2: (l2 vip is available inside same LAN )                #
#     pg-meta --->  10.10.10.2 ---> 10.10.10.10                #
#     pg-test --->  10.10.10.3 ---> 10.10.10.1{1,2,3}          #
#==============================================================#


all:

  ##################################################################
  #                            CLUSTERS                            #
  ##################################################################
  # infra, etcd, minio, pgsql, and redis clusters are defined as
  # k:v pairs inside `all.children`, where the key is the cluster name
  # and the value is a cluster definition consisting of two parts:
  # `hosts`: cluster members' ip and instance-level variables
  # `vars` : cluster-level variables
  ##################################################################
  children:                                 # groups definition

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    #----------------------------------#
    # pgsql cluster: pg-meta (CMDB)    #
    #----------------------------------#
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary , pg_offline_query: true } }
      vars:
        pg_cluster: pg-meta

        # define business databases here: https://doc.pgsty.com/pgsql/db
        pg_databases:                       # define business databases on this cluster, array of database definition
          - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
            #state: create                  # optional, create|absent|recreate, create by default
            baseline: cmdb.sql              # optional, database sql baseline path, (relative path among ansible search path, e.g: files/)
            schemas: [pigsty]               # optional, additional schemas to be created, array of schema names
            extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
              - { name: vector }            # install pgvector extension on this database by default
            comment: pigsty meta database   # optional, comment string for this database
            #pgbouncer: true                # optional, add this database to pgbouncer database list? true by default
            #owner: postgres                # optional, database owner, current user if not specified
            #template: template1            # optional, which template to use, template1 by default
            #strategy: FILE_COPY            # optional, clone strategy: FILE_COPY or WAL_LOG (PG15+), default to PG's default
            #encoding: UTF8                 # optional, inherited from template / cluster if not defined (UTF8)
            #locale: C                      # optional, inherited from template / cluster if not defined (C)
            #lc_collate: C                  # optional, inherited from template / cluster if not defined (C)
            #lc_ctype: C                    # optional, inherited from template / cluster if not defined (C)
            #locale_provider: libc          # optional, locale provider: libc, icu, builtin (PG15+)
            #icu_locale: en-US              # optional, icu locale for icu locale provider (PG15+)
            #icu_rules: ''                  # optional, icu rules for icu locale provider (PG16+)
            #builtin_locale: C.UTF-8        # optional, builtin locale for builtin locale provider (PG17+)
            #tablespace: pg_default         # optional, default tablespace, pg_default by default
            #is_template: false             # optional, mark database as template, allowing clone by any user with CREATEDB privilege
            #allowconn: true                # optional, allow connection, true by default. false will disable connect at all
            #revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
            #register_datasource: true      # optional, register this database to grafana datasources? true by default
            #connlimit: -1                  # optional, database connection limit, default -1 disable limit
            #pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
            #pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
            #pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
            #pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
            #pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
            #pool_max_db_conn: 100          # optional, max database connections at database level, default 100
          #- { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }
          #- { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          #- { name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong the api gateway database }
          #- { name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          #- { name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database }

        # define business users here: https://doc.pgsty.com/pgsql/user
        pg_users:                           # define business users/roles on this cluster, array of user definition
          - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
            password: DBUser.Meta           # optional, password, can be a scram-sha-256 hash string or plain text
            #login: true                     # optional, can log in, true by default  (new biz ROLE should be false)
            #superuser: false                # optional, is superuser? false by default
            #createdb: false                 # optional, can create database? false by default
            #createrole: false               # optional, can create role? false by default
            #inherit: true                   # optional, can this role use inherited privileges? true by default
            #replication: false              # optional, can this role do replication? false by default
            #bypassrls: false                # optional, can this role bypass row level security? false by default
            #pgbouncer: true                 # optional, add this user to pgbouncer user-list? false by default (production user should be true explicitly)
            #connlimit: -1                   # optional, user connection limit, default -1 disable limit
            #expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
            #expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired  (OVERWRITTEN by expire_in)
            #comment: pigsty admin user      # optional, comment string for this user/role
            #roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin,readonly,readwrite,offline}
            #parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
            #pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
            #pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
          - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database}
          #- {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database   }
          #- {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database  }
          #- {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service      }
          #- {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service    }

        # define business service here: https://doc.pgsty.com/pgsql/service
        pg_services:                        # extra services in addition to pg_default_services, array of service definition
          # standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
          - name: standby                   # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
            port: 5435                      # required, service exposed port (work as kubernetes service node port mode)
            ip: "*"                         # optional, service bind ip address, `*` for all ip by default
            selector: "[]"                  # required, service member selector, use JMESPath to filter inventory
            dest: default                   # optional, destination port, default|postgres|pgbouncer|<port_number>, 'default' by default
            check: /sync                    # optional, health check url path, / by default
            backup: "[? pg_role == `primary`]"  # backup server selector
            maxconn: 3000                   # optional, max allowed front-end connection
            balance: roundrobin             # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
            options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'

        # define pg extensions: https://doc.pgsty.com/pgsql/extension
        pg_libs: 'pg_stat_statements, auto_explain' # libraries preloaded via shared_preload_libraries
        #pg_extensions: [] # extensions to be installed on this cluster

        # define HBA rules here: https://doc.pgsty.com/pgsql/hba
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}

        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

        node_crontab:  # make a full backup at 1am every day
          - '00 01 * * * postgres /pg/bin/pg-backup full'

    #----------------------------------#
    # pgsql cluster: pg-test (3 nodes) #
    #----------------------------------#
    # pg-test --->  10.10.10.3 ---> 10.10.10.1{1,2,3}
    pg-test:                          # define the new 3-node cluster pg-test
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
        10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
        10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
      vars:
        pg_cluster: pg-test           # define pgsql cluster name
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: test }] # create a database and user named 'test'
        node_tune: tiny
        pg_conf: tiny.yml
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1
        node_crontab:  # make a full backup at 1am on Monday, and incremental backups on the other days
          - '00 01 * * 1 postgres /pg/bin/pg-backup full'
          - '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup'

    #----------------------------------#
    # redis ms, sentinel, native cluster
    #----------------------------------#
    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    redis-meta: # redis sentinel x 3
      hosts: { 10.10.10.11: { redis_node: 1 , redis_instances: { 26379: { } ,26380: { } ,26381: { } } } }
      vars:
        redis_cluster: redis-meta
        redis_password: 'redis.meta'
        redis_mode: sentinel
        redis_max_memory: 16MB
        redis_sentinel_monitor: # primary list for redis sentinel, use cls as name, primary ip:port
          - { name: redis-ms, host: 10.10.10.10, port: 6379 ,password: redis.ms, quorum: 2 }

    redis-test: # redis native cluster: 3m x 3s
      hosts:
        10.10.10.12: { redis_node: 1 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
        10.10.10.13: { redis_node: 2 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
      vars: { redis_cluster: redis-test ,redis_password: 'redis.test' ,redis_mode: cluster, redis_max_memory: 32MB }


  ####################################################################
  #                             VARS                                 #
  ####################################################################
  vars:                               # global variables


    #================================================================#
    #                         VARS: INFRA                            #
    #================================================================#

    #-----------------------------------------------------------------
    # META
    #-----------------------------------------------------------------
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    language: en                      # default language: en, zh
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]

    #-----------------------------------------------------------------
    # CA
    #-----------------------------------------------------------------
    ca_create: true                   # create ca if not exists? or just abort
    ca_cn: pigsty-ca                  # ca common name, fixed as pigsty-ca
    cert_validity: 7300d              # cert validity, 20 years by default

    #-----------------------------------------------------------------
    # INFRA_IDENTITY
    #-----------------------------------------------------------------
    #infra_seq: 1                     # infra node identity, explicitly required
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name
    infra_data: /data/infra           # default data path for infrastructure data

    #-----------------------------------------------------------------
    # REPO
    #-----------------------------------------------------------------
    repo_enabled: true                # create a yum repo on this infra node?
    repo_home: /www                   # repo home dir, `/www` by default
    repo_name: pigsty                 # repo name, pigsty by default
    repo_endpoint: http://${admin_ip}:80 # access point to this repo by domain or ip:port
    repo_remove: true                 # remove existing upstream repo
    repo_modules: infra,node,pgsql    # which repo modules are installed in repo_upstream
    repo_upstream:                    # where to download
      - { name: pigsty-local   ,description: 'Pigsty Local'       ,module: local   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://${admin_ip}/pigsty'  }} # used by intranet nodes
      - { name: pigsty-infra   ,description: 'Pigsty INFRA'       ,module: infra   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/yum/infra/$basearch' ,china: 'https://repo.pigsty.cc/yum/infra/$basearch' }}
      - { name: pigsty-pgsql   ,description: 'Pigsty PGSQL'       ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/yum/pgsql/el$releasever.$basearch' ,china: 'https://repo.pigsty.cc/yum/pgsql/el$releasever.$basearch' }}
      - { name: nginx          ,description: 'Nginx Repo'         ,module: infra   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://nginx.org/packages/rhel/$releasever/$basearch/' }}
      - { name: docker-ce      ,description: 'Docker CE'          ,module: infra   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.docker.com/linux/centos/$releasever/$basearch/stable'    ,china: 'https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable' ,europe: 'https://mirrors.xtom.de/docker-ce/linux/centos/$releasever/$basearch/stable' }}
      - { name: baseos         ,description: 'EL 8+ BaseOS'       ,module: node    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/BaseOS/$basearch/os/'     ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/BaseOS/$basearch/os/'         ,europe: 'https://mirrors.xtom.de/rocky/$releasever/BaseOS/$basearch/os/'     }}
      - { name: appstream      ,description: 'EL 8+ AppStream'    ,module: node    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/AppStream/$basearch/os/'  ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/AppStream/$basearch/os/'      ,europe: 'https://mirrors.xtom.de/rocky/$releasever/AppStream/$basearch/os/'  }}
      - { name: extras         ,description: 'EL 8+ Extras'       ,module: node    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/extras/$basearch/os/'     ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/extras/$basearch/os/'         ,europe: 'https://mirrors.xtom.de/rocky/$releasever/extras/$basearch/os/'     }}
      - { name: powertools     ,description: 'EL 8 PowerTools'    ,module: node    ,releases: [8     ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/PowerTools/$basearch/os/' ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/PowerTools/$basearch/os/'     ,europe: 'https://mirrors.xtom.de/rocky/$releasever/PowerTools/$basearch/os/' }}
      - { name: crb            ,description: 'EL 9 CRB'           ,module: node    ,releases: [  9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/CRB/$basearch/os/'        ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/CRB/$basearch/os/'            ,europe: 'https://mirrors.xtom.de/rocky/$releasever/CRB/$basearch/os/'        }}
      - { name: epel           ,description: 'EL 8+ EPEL'         ,module: node    ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://mirrors.edge.kernel.org/fedora-epel/$releasever/Everything/$basearch/' ,china: 'https://mirrors.aliyun.com/epel/$releasever/Everything/$basearch/'         ,europe: 'https://mirrors.xtom.de/epel/$releasever/Everything/$basearch/'     }}
      - { name: epel           ,description: 'EL 10 EPEL'         ,module: node    ,releases: [    10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://mirrors.edge.kernel.org/fedora-epel/$releasever.0/Everything/$basearch/' ,china: 'https://mirrors.aliyun.com/epel/$releasever.0/Everything/$basearch/'     ,europe: 'https://mirrors.xtom.de/epel/$releasever.0/Everything/$basearch/'   }}
      - { name: pgdg-common    ,description: 'PostgreSQL Common'  ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg-el8fix    ,description: 'PostgreSQL EL8FIX'  ,module: pgsql   ,releases: [8     ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-$basearch/'  ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-$basearch/'  ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-$basearch/'  }}
      - { name: pgdg-el9fix    ,description: 'PostgreSQL EL9FIX'  ,module: pgsql   ,releases: [  9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-$basearch/'   ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-$basearch/'   ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-$basearch/'   }}
      - { name: pgdg-el10fix   ,description: 'PostgreSQL EL10FIX' ,module: pgsql   ,releases: [    10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-rocky10-sysupdates/redhat/rhel-10-$basearch/' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/common/pgdg-rocky10-sysupdates/redhat/rhel-10-$basearch/' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-rocky10-sysupdates/redhat/rhel-10-$basearch/' }}
      - { name: pgdg13         ,description: 'PostgreSQL 13'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/13/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/13/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/13/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg14         ,description: 'PostgreSQL 14'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/14/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/14/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/14/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg15         ,description: 'PostgreSQL 15'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/15/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/15/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/15/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg16         ,description: 'PostgreSQL 16'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/16/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/16/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/16/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg17         ,description: 'PostgreSQL 17'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/17/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/17/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/17/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg18         ,description: 'PostgreSQL 18'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/18/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/18/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/18/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg-beta      ,description: 'PostgreSQL Testing' ,module: beta    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/testing/19/redhat/rhel-$releasever-$basearch'  ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/testing/19/redhat/rhel-$releasever-$basearch'  ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/testing/19/redhat/rhel-$releasever-$basearch'  }}
      - { name: pgdg-extras    ,description: 'PostgreSQL Extra'   ,module: extra   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/extras/redhat/rhel-$releasever-$basearch'      ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/extras/redhat/rhel-$releasever-$basearch'      ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/extras/redhat/rhel-$releasever-$basearch'      }}
      - { name: pgdg13-nonfree ,description: 'PostgreSQL 13+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/13/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/13/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/13/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg14-nonfree ,description: 'PostgreSQL 14+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/14/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/14/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/14/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg15-nonfree ,description: 'PostgreSQL 15+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/15/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/15/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/15/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg16-nonfree ,description: 'PostgreSQL 16+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/16/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/16/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/16/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg17-nonfree ,description: 'PostgreSQL 17+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg18-nonfree ,description: 'PostgreSQL 18+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/18/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/18/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/18/redhat/rhel-$releasever-$basearch' }}
      - { name: timescaledb    ,description: 'TimescaleDB'        ,module: extra   ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/timescale/timescaledb/el/$releasever/$basearch'  }}
      - { name: percona        ,description: 'Percona TDE'        ,module: percona ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/yum/percona/el$releasever.$basearch' ,china: 'https://repo.pigsty.cc/yum/percona/el$releasever.$basearch' ,origin: 'http://repo.percona.com/ppg-18.1/yum/release/$releasever/RPMS/$basearch'  }}
      - { name: wiltondb       ,description: 'WiltonDB'           ,module: mssql   ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/yum/mssql/el$releasever.$basearch', china: 'https://repo.pigsty.cc/yum/mssql/el$releasever.$basearch' , origin: 'https://download.copr.fedorainfracloud.org/results/wiltondb/wiltondb/epel-$releasever-$basearch/' }}
      - { name: groonga        ,description: 'Groonga'            ,module: groonga ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.groonga.org/almalinux/$releasever/$basearch/' }}
      - { name: mysql          ,description: 'MySQL'              ,module: mysql   ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mysql.com/yum/mysql-8.4-community/el/$releasever/$basearch/' }}
      - { name: mongo          ,description: 'MongoDB'            ,module: mongo   ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/8.0/$basearch/' ,china: 'https://mirrors.aliyun.com/mongodb/yum/redhat/$releasever/mongodb-org/8.0/$basearch/' }}
      - { name: redis          ,description: 'Redis'              ,module: redis   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://rpmfind.net/linux/remi/enterprise/$releasever/redis72/$basearch/' }}
      - { name: grafana        ,description: 'Grafana'            ,module: grafana ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://rpm.grafana.com', china: 'https://mirrors.aliyun.com/grafana/yum/' }}
      - { name: kubernetes     ,description: 'Kubernetes'         ,module: kube    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://pkgs.k8s.io/core:/stable:/v1.33/rpm/', china: 'https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.33/rpm/' }}
      - { name: gitlab-ee      ,description: 'Gitlab EE'          ,module: gitlab  ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.gitlab.com/gitlab/gitlab-ee/el/$releasever/$basearch' }}
      - { name: gitlab-ce      ,description: 'Gitlab CE'          ,module: gitlab  ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.gitlab.com/gitlab/gitlab-ce/el/$releasever/$basearch' }}
      - { name: clickhouse     ,description: 'ClickHouse'         ,module: click   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.clickhouse.com/rpm/stable/', china: 'https://mirrors.aliyun.com/clickhouse/rpm/stable/' }}

    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules ]
    repo_extra_packages: [ pgsql-main ]
    repo_url_packages: []

    #-----------------------------------------------------------------
    # INFRA_PACKAGE
    #-----------------------------------------------------------------
    infra_packages:                   # packages to be installed on infra nodes
      - grafana,grafana-plugins,grafana-victorialogs-ds,grafana-victoriametrics-ds,victoria-metrics,victoria-logs,victoria-traces,vmutils,vlogscli,alertmanager
      - node_exporter,blackbox_exporter,nginx_exporter,pg_exporter,pev2,nginx,dnsmasq,ansible,etcd,python3-requests,redis,mcli,restic,certbot,python3-certbot-nginx
    infra_packages_pip: ''            # pip installed packages for infra nodes

    #-----------------------------------------------------------------
    # NGINX
    #-----------------------------------------------------------------
    nginx_enabled: true               # enable nginx on this infra node?
    nginx_clean: false                # clean existing nginx config during init?
    nginx_exporter_enabled: true      # enable nginx_exporter on this infra node?
    nginx_exporter_port: 9113         # nginx_exporter listen port, 9113 by default
    nginx_sslmode: enable             # nginx ssl mode? disable,enable,enforce
    nginx_cert_validity: 397d         # nginx self-signed cert validity, 397d by default
    nginx_home: /www                  # nginx content dir, `/www` by default (soft link to nginx_data)
    nginx_data: /data/nginx           # nginx actual data dir, /data/nginx by default
    nginx_users: { admin : pigsty }   # nginx basic auth users: name and pass dict
    nginx_port: 80                    # nginx listen port, 80 by default
    nginx_ssl_port: 443               # nginx ssl listen port, 443 by default
    certbot_sign: false               # sign nginx cert with certbot during setup?
    certbot_email: [email protected]     # certbot email address, used for free ssl
    certbot_options: ''               # certbot extra options

    #-----------------------------------------------------------------
    # DNS
    #-----------------------------------------------------------------
    dns_enabled: true                 # setup dnsmasq on this infra node?
    dns_port: 53                      # dns server listen port, 53 by default
    dns_records:                      # dynamic dns records resolved by dnsmasq
      - "${admin_ip} i.pigsty"
      - "${admin_ip} m.pigsty supa.pigsty api.pigsty adm.pigsty cli.pigsty ddl.pigsty"

    #-----------------------------------------------------------------
    # VICTORIA
    #-----------------------------------------------------------------
    vmetrics_enabled: true            # enable victoria-metrics on this infra node?
    vmetrics_clean: false             # clean existing victoria-metrics data during init?
    vmetrics_port: 8428               # victoria-metrics listen port, 8428 by default
    vmetrics_scrape_interval: 10s     # victoria global scrape interval, 10s by default
    vmetrics_scrape_timeout: 8s       # victoria global scrape timeout, 8s by default
    vmetrics_options: >-
      -retentionPeriod=15d
      -promscrape.fileSDCheckInterval=5s
    vlogs_enabled: true               # enable victoria-logs on this infra node?
    vlogs_clean: false                # clean victoria-logs data during init?
    vlogs_port: 9428                  # victoria-logs listen port, 9428 by default
    vlogs_options: >-
      -retentionPeriod=15d
      -retention.maxDiskSpaceUsageBytes=50GiB
      -insert.maxLineSizeBytes=1MB
      -search.maxQueryDuration=120s
    vtraces_enabled: true             # enable victoria-traces on this infra node?
    vtraces_clean: false              # clean victoria-traces data during init?
    vtraces_port: 10428               # victoria-traces listen port, 10428 by default
    vtraces_options: >-
      -retentionPeriod=15d
      -retention.maxDiskSpaceUsageBytes=50GiB
    vmalert_enabled: true             # enable vmalert on this infra node?
    vmalert_port: 8880                # vmalert listen port, 8880 by default
    vmalert_options: ''               # vmalert extra server options

    #-----------------------------------------------------------------
    # PROMETHEUS
    #-----------------------------------------------------------------
    blackbox_enabled: true            # setup blackbox_exporter on this infra node?
    blackbox_port: 9115               # blackbox_exporter listen port, 9115 by default
    blackbox_options: ''              # blackbox_exporter extra server options
    alertmanager_enabled: true        # setup alertmanager on this infra node?
    alertmanager_port: 9059           # alertmanager listen port, 9059 by default
    alertmanager_options: ''          # alertmanager extra server options
    exporter_metrics_path: /metrics   # exporter metric path, `/metrics` by default

    #-----------------------------------------------------------------
    # GRAFANA
    #-----------------------------------------------------------------
    grafana_enabled: true             # enable grafana on this infra node?
    grafana_port: 3000                # default listen port for grafana
    grafana_clean: false              # clean grafana data during init?
    grafana_admin_username: admin     # grafana admin username, `admin` by default
    grafana_admin_password: pigsty    # grafana admin password, `pigsty` by default
    grafana_auth_proxy: false         # enable grafana auth proxy?
    grafana_pgurl: ''                 # external postgres database url for grafana if given
    grafana_view_password: DBUser.Viewer # password for grafana meta pg datasource


    #================================================================#
    #                         VARS: NODE                             #
    #================================================================#

    #-----------------------------------------------------------------
    # NODE_IDENTITY
    #-----------------------------------------------------------------
    #nodename:           # [INSTANCE] # node instance identity, use hostname if missing, optional
    node_cluster: nodes   # [CLUSTER] # node cluster identity, use 'nodes' if missing, optional
    nodename_overwrite: true          # overwrite node's hostname with nodename?
    nodename_exchange: false          # exchange nodename among play hosts?
    node_id_from_pg: true             # use postgres identity as node identity if applicable?

    #-----------------------------------------------------------------
    # NODE_DNS
    #-----------------------------------------------------------------
    node_write_etc_hosts: true        # modify `/etc/hosts` on target node?
    node_default_etc_hosts:           # static dns records in `/etc/hosts`
      - "${admin_ip} i.pigsty"
    node_etc_hosts: []                # extra static dns records in `/etc/hosts`
    node_dns_method: add              # how to handle dns servers: add,none,overwrite
    node_dns_servers: ['${admin_ip}'] # dynamic nameserver in `/etc/resolv.conf`
    node_dns_options:                 # dns resolv options in `/etc/resolv.conf`
      - options single-request-reopen timeout:1

    #-----------------------------------------------------------------
    # NODE_PACKAGE
    #-----------------------------------------------------------------
    node_repo_modules: local          # upstream repo to be added on node, local by default
    node_repo_remove: true            # remove existing repo on node?
    node_packages: [openssh-server]   # packages to be installed current nodes with latest version
    node_default_packages:            # default packages to be installed on all nodes
      - lz4,unzip,bzip2,pv,jq,git,ncdu,make,patch,bash,lsof,wget,uuid,tuned,nvme-cli,numactl,sysstat,iotop,htop,rsync,tcpdump
      - python3,python3-pip,socat,lrzsz,net-tools,ipvsadm,telnet,ca-certificates,openssl,keepalived,etcd,haproxy,chrony,pig
      - zlib,yum,audit,bind-utils,readline,vim-minimal,node_exporter,grubby,openssh-server,openssh-clients,chkconfig,vector

    #-----------------------------------------------------------------
    # NODE_SEC
    #-----------------------------------------------------------------
    node_selinux_mode: permissive     # set selinux mode: enforcing,permissive,disabled
    node_firewall_mode: zone          # firewall mode: off, none, zone, zone by default
    node_firewall_intranet:           # which intranet cidr considered as internal network
      - 10.0.0.0/8
      - 192.168.0.0/16
      - 172.16.0.0/12
    node_firewall_public_port:        # expose these ports to public network in (zone, strict) mode
      - 22                            # enable ssh access
      - 80                            # enable http access
      - 443                           # enable https access
      - 5432                          # enable postgresql access (think twice before exposing it!)

    #-----------------------------------------------------------------
    # NODE_TUNE
    #-----------------------------------------------------------------
    node_disable_numa: false          # disable node numa, reboot required
    node_disable_swap: false          # disable node swap, use with caution
    node_static_network: true         # preserve dns resolver settings after reboot
    node_disk_prefetch: false         # setup disk prefetch on HDD to increase performance
    node_kernel_modules: [ softdog, ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh ]
    node_hugepage_count: 0            # number of 2MB hugepage, take precedence over ratio
    node_hugepage_ratio: 0            # node mem hugepage ratio, 0 disable it by default
    node_overcommit_ratio: 0          # node mem overcommit ratio, 0 disable it by default
    node_tune: oltp                   # node tuned profile: none,oltp,olap,crit,tiny
    node_sysctl_params: { }           # sysctl parameters in k:v format in addition to tuned

    #-----------------------------------------------------------------
    # NODE_ADMIN
    #-----------------------------------------------------------------
    node_data: /data                  # node main data directory, `/data` by default
    node_admin_enabled: true          # create an admin user on target node?
    node_admin_uid: 88                # uid and gid for node admin user
    node_admin_username: dba          # name of node admin user, `dba` by default
    node_admin_sudo: nopass           # admin sudo privilege, all,nopass. nopass by default
    node_admin_ssh_exchange: true     # exchange admin ssh key among node cluster
    node_admin_pk_current: true       # add current user's ssh pk to admin authorized_keys
    node_admin_pk_list: []            # ssh public keys to be added to admin user
    node_aliases: {}                  # extra shell aliases to be added, k:v dict

    #-----------------------------------------------------------------
    # NODE_TIME
    #-----------------------------------------------------------------
    node_timezone: ''                 # setup node timezone, empty string to skip
    node_ntp_enabled: true            # enable chronyd time sync service?
    node_ntp_servers:                 # ntp servers in `/etc/chrony.conf`
      - pool pool.ntp.org iburst
    node_crontab_overwrite: true      # overwrite or append to `/etc/crontab`?
    node_crontab: [ ]                 # crontab entries in `/etc/crontab`

    #-----------------------------------------------------------------
    # NODE_VIP
    #-----------------------------------------------------------------
    vip_enabled: false                # enable vip on this node cluster?
    # vip_address:         [IDENTITY] # node vip address in ipv4 format, required if vip is enabled
    # vip_vrid:            [IDENTITY] # required, integer, 1-254, should be unique among same VLAN
    vip_role: backup                  # optional, `master|backup`, backup by default, use as init role
    vip_preempt: false                # optional, `true/false`, false by default, enable vip preemption
    vip_interface: eth0               # node vip network interface to listen, `eth0` by default
    vip_dns_suffix: ''                # node vip dns name suffix, empty string by default
    vip_exporter_port: 9650           # keepalived exporter listen port, 9650 by default

    #-----------------------------------------------------------------
    # HAPROXY
    #-----------------------------------------------------------------
    haproxy_enabled: true             # enable haproxy on this node?
    haproxy_clean: false              # cleanup all existing haproxy config?
    haproxy_reload: true              # reload haproxy after config?
    haproxy_auth_enabled: true        # enable authentication for haproxy admin page
    haproxy_admin_username: admin     # haproxy admin username, `admin` by default
    haproxy_admin_password: pigsty    # haproxy admin password, `pigsty` by default
    haproxy_exporter_port: 9101       # haproxy admin/exporter port, 9101 by default
    haproxy_client_timeout: 24h       # client side connection timeout, 24h by default
    haproxy_server_timeout: 24h       # server side connection timeout, 24h by default
    haproxy_services: []              # list of haproxy service to be exposed on node

    #-----------------------------------------------------------------
    # NODE_EXPORTER
    #-----------------------------------------------------------------
    node_exporter_enabled: true       # setup node_exporter on this node?
    node_exporter_port: 9100          # node exporter listen port, 9100 by default
    node_exporter_options: '--no-collector.softnet --no-collector.nvme --collector.tcpstat --collector.processes'

    #-----------------------------------------------------------------
    # VECTOR
    #-----------------------------------------------------------------
    vector_enabled: true              # enable vector log collector?
    vector_clean: false               # purge vector data dir during init?
    vector_data: /data/vector         # vector data dir, /data/vector by default
    vector_port: 9598                 # vector metrics port, 9598 by default
    vector_read_from: beginning       # vector read from beginning or end
    vector_log_endpoint: [ infra ]    # if defined, send vector logs to this endpoint


    #================================================================#
    #                        VARS: DOCKER                            #
    #================================================================#
    docker_enabled: false             # enable docker on this node?
    docker_data: /data/docker         # docker data directory, /data/docker by default
    docker_storage_driver: overlay2   # docker storage driver, can be zfs, btrfs
    docker_cgroups_driver: systemd    # docker cgroup fs driver: cgroupfs,systemd
    docker_registry_mirrors: []       # docker registry mirror list
    docker_exporter_port: 9323        # docker metrics exporter port, 9323 by default
    docker_image: []                  # docker image to be pulled after bootstrap
    docker_image_cache: /tmp/docker/*.tgz # docker image cache glob pattern

    #================================================================#
    #                         VARS: ETCD                             #
    #================================================================#
    #etcd_seq: 1                      # etcd instance identifier, explicitly required
    etcd_cluster: etcd                # etcd cluster & group name, etcd by default
    etcd_safeguard: false             # prevent purging running etcd instance?
    etcd_clean: true                  # purging existing etcd during initialization?
    etcd_data: /data/etcd             # etcd data directory, /data/etcd by default
    etcd_port: 2379                   # etcd client port, 2379 by default
    etcd_peer_port: 2380              # etcd peer port, 2380 by default
    etcd_init: new                    # etcd initial cluster state, new or existing
    etcd_election_timeout: 1000       # etcd election timeout, 1000ms by default
    etcd_heartbeat_interval: 100      # etcd heartbeat interval, 100ms by default
    etcd_root_password: Etcd.Root     # etcd root password for RBAC, change it!


    #================================================================#
    #                         VARS: MINIO                            #
    #================================================================#
    #minio_seq: 1                     # minio instance identifier, REQUIRED
    minio_cluster: minio              # minio cluster identifier, REQUIRED
    minio_clean: false                # cleanup minio during init? false by default
    minio_user: minio                 # minio os user, `minio` by default
    minio_https: true                 # use https for minio, true by default
    minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio node name pattern
    minio_data: '/data/minio'         # minio data dir(s), use {x...y} to specify multi drivers
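    # e.g. (illustrative): minio_data: '/data{1...4}/minio' expands to a 4-drive layout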
    #minio_volumes:                   # minio data volumes, override defaults if specified
    minio_domain: sss.pigsty          # minio external domain name, `sss.pigsty` by default
    minio_port: 9000                  # minio service port, 9000 by default
    minio_admin_port: 9001            # minio console port, 9001 by default
    minio_access_key: minioadmin      # root access key, `minioadmin` by default
    minio_secret_key: S3User.MinIO    # root secret key, `S3User.MinIO` by default
    minio_extra_vars: ''              # extra environment variables
    minio_provision: true             # run minio provisioning tasks?
    minio_alias: sss                  # alias name for local minio deployment
    #minio_endpoint: https://sss.pigsty:9000 # if not specified, overwritten by defaults
    minio_buckets:                    # list of minio bucket to be created
      - { name: pgsql }
      - { name: meta ,versioning: true }
      - { name: data }
    minio_users:                      # list of minio user to be created
      - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
      - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
      - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }


    #================================================================#
    #                         VARS: REDIS                            #
    #================================================================#
    #redis_cluster:        <CLUSTER> # redis cluster name, required identity parameter
    #redis_node: 1            <NODE> # redis node sequence number, node int id required
    #redis_instances: {}      <NODE> # redis instances definition on this redis node
    redis_fs_main: /data              # redis main data mountpoint, `/data` by default
    redis_exporter_enabled: true      # install redis exporter on redis nodes?
    redis_exporter_port: 9121         # redis exporter listen port, 9121 by default
    redis_exporter_options: ''        # cli args and extra options for redis exporter
    redis_safeguard: false            # prevent purging running redis instance?
    redis_clean: true                 # purging existing redis during init?
    redis_rmdata: true                # remove redis data when purging redis server?
    redis_mode: standalone            # redis mode: standalone,cluster,sentinel
    redis_conf: redis.conf            # redis config template path, except sentinel
    redis_bind_address: '0.0.0.0'     # redis bind address, empty string will use host ip
    redis_max_memory: 1GB             # max memory used by each redis instance
    redis_mem_policy: allkeys-lru     # redis memory eviction policy
    redis_password: ''                # redis password, empty string will disable password
    redis_rdb_save: ['1200 1']        # redis rdb save directives, disable with empty list
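    # e.g. '1200 1' dumps an RDB snapshot if at least 1 key changed within 1200 seconds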
    redis_aof_enabled: false          # enable redis append only file?
    redis_rename_commands: {}         # rename redis dangerous commands
    redis_cluster_replicas: 1         # replica number for one master in redis cluster
    redis_sentinel_monitor: []        # sentinel master list, works on sentinel cluster only


    #================================================================#
    #                         VARS: PGSQL                            #
    #================================================================#

    #-----------------------------------------------------------------
    # PG_IDENTITY
    #-----------------------------------------------------------------
    pg_mode: pgsql          #CLUSTER  # pgsql cluster mode: pgsql,citus,gpsql,mssql,mysql,ivory,polar
    # pg_cluster:           #CLUSTER  # pgsql cluster name, required identity parameter
    # pg_seq: 0             #INSTANCE # pgsql instance seq number, required identity parameter
    # pg_role: replica      #INSTANCE # pgsql role, required, could be primary,replica,offline
    # pg_instances: {}      #INSTANCE # define multiple pg instances on node in `{port:ins_vars}` format
    # pg_upstream:          #INSTANCE # repl upstream ip addr for standby cluster or cascade replica
    # pg_shard:             #CLUSTER  # pgsql shard name, optional identity for sharding clusters
    # pg_group: 0           #CLUSTER  # pgsql shard index number, optional identity for sharding clusters
    # gp_role: master       #CLUSTER  # greenplum role of this cluster, could be master or segment
    pg_offline_query: false #INSTANCE # set to true to enable offline queries on this instance

    #-----------------------------------------------------------------
    # PG_BUSINESS
    #-----------------------------------------------------------------
    # postgres business object definition, overwrite in group vars
    pg_users: []                      # postgres business users
    pg_databases: []                  # postgres business databases
    pg_services: []                   # postgres business services
    pg_hba_rules: []                  # business hba rules for postgres
    pgb_hba_rules: []                 # business hba rules for pgbouncer
    # global credentials, overwrite in global vars
    pg_dbsu_password: ''              # dbsu password, empty string means no dbsu password by default
    pg_replication_username: replicator
    pg_replication_password: DBUser.Replicator
    pg_admin_username: dbuser_dba
    pg_admin_password: DBUser.DBA
    pg_monitor_username: dbuser_monitor
    pg_monitor_password: DBUser.Monitor

    #-----------------------------------------------------------------
    # PG_INSTALL
    #-----------------------------------------------------------------
    pg_dbsu: postgres                 # os dbsu name, postgres by default, better not change it
    pg_dbsu_uid: 26                   # os dbsu uid and gid, 26 for default postgres users and groups
    pg_dbsu_sudo: limit               # dbsu sudo privilege, none,limit,all,nopass. limit by default
    pg_dbsu_home: /var/lib/pgsql      # postgresql home directory, `/var/lib/pgsql` by default
    pg_dbsu_ssh_exchange: true        # exchange postgres dbsu ssh key among same pgsql cluster
    pg_version: 18                    # postgres major version to be installed, 18 by default
    pg_bin_dir: /usr/pgsql/bin        # postgres binary dir, `/usr/pgsql/bin` by default
    pg_log_dir: /pg/log/postgres      # postgres log dir, `/pg/log/postgres` by default
    pg_packages:                      # pg packages to be installed, alias can be used
      - pgsql-main pgsql-common
    pg_extensions: []                 # pg extensions to be installed, alias can be used
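    # e.g. (illustrative): pg_extensions: [ postgis, pgvector ]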

    #-----------------------------------------------------------------
    # PG_BOOTSTRAP
    #-----------------------------------------------------------------
    pg_data: /pg/data                 # postgres data directory, `/pg/data` by default
    pg_fs_main: /data/postgres        # postgres main data directory, `/data/postgres` by default
    pg_fs_backup: /data/backups       # postgres backup data directory, `/data/backups` by default
    pg_storage_type: SSD              # storage type for pg main data, SSD,HDD, SSD by default
    pg_dummy_filesize: 64MiB          # size of `/pg/dummy`, hold 64MB disk space for emergency use
    pg_listen: '0.0.0.0'              # postgres/pgbouncer listen addresses, comma separated list
    pg_port: 5432                     # postgres listen port, 5432 by default
    pg_localhost: /var/run/postgresql # postgres unix socket dir for localhost connection
    patroni_enabled: true             # if disabled, no postgres cluster will be created during init
    patroni_mode: default             # patroni working mode: default,pause,remove
    pg_namespace: /pg                 # top level key namespace in etcd, used by patroni & vip
    patroni_port: 8008                # patroni listen port, 8008 by default
    patroni_log_dir: /pg/log/patroni  # patroni log dir, `/pg/log/patroni` by default
    patroni_ssl_enabled: false        # secure patroni RestAPI communications with SSL?
    patroni_watchdog_mode: off        # patroni watchdog mode: automatic,required,off. off by default
    patroni_username: postgres        # patroni restapi username, `postgres` by default
    patroni_password: Patroni.API     # patroni restapi password, `Patroni.API` by default
    pg_etcd_password: ''              # etcd password for this pg cluster, '' to use pg_cluster
    pg_primary_db: postgres           # primary database name, used by citus,etc... ,postgres by default
    pg_parameters: {}                 # extra parameters in postgresql.auto.conf
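    # e.g. (illustrative): pg_parameters: { log_min_duration_statement: 1000 }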
    pg_files: []                      # extra files to be copied to postgres data directory (e.g. license)
    pg_conf: oltp.yml                 # config template: oltp,olap,crit,tiny. `oltp.yml` by default
    pg_max_conn: auto                 # postgres max connections, `auto` will use recommended value
    pg_shared_buffer_ratio: 0.25      # postgres shared buffers ratio, 0.25 by default, 0.1~0.4
    pg_io_method: worker              # io method for postgres, auto,fsync,worker,io_uring, worker by default
    pg_rto: 30                        # recovery time objective in seconds,  `30s` by default
    pg_rpo: 1048576                   # recovery point objective in bytes, `1MiB` at most by default
    pg_libs: 'pg_stat_statements, auto_explain'  # preloaded libraries, `pg_stat_statements,auto_explain` by default
    pg_delay: 0                       # replication apply delay for standby cluster leader
    pg_checksum: true                 # enable data checksum for postgres cluster?
    pg_encoding: UTF8                 # database cluster encoding, `UTF8` by default
    pg_locale: C                      # database cluster locale, `C` by default
    pg_lc_collate: C                  # database cluster collate, `C` by default
    pg_lc_ctype: C                    # database character type, `C` by default
    #pgsodium_key: ""                 # pgsodium key, 64 hex digit, default to sha256(pg_cluster)
    #pgsodium_getkey_script: ""       # pgsodium getkey script path, pgsodium_getkey by default

    #-----------------------------------------------------------------
    # PG_PROVISION
    #-----------------------------------------------------------------
    pg_provision: true                # provision postgres cluster after bootstrap
    pg_init: pg-init                  # provision init script for cluster template, `pg-init` by default
    pg_default_roles:                 # default roles and users in postgres cluster
      - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
      - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
      - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
      - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
      - { name: postgres     ,superuser: true  ,comment: system superuser }
      - { name: replicator ,replication: true  ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator }
      - { name: dbuser_dba   ,superuser: true  ,roles: [dbrole_admin]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 ,comment: pgsql admin user }
      - { name: dbuser_monitor ,roles: [pg_monitor] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
    pg_default_privileges:            # default privileges when created by admin user
      - GRANT USAGE      ON SCHEMAS   TO dbrole_readonly
      - GRANT SELECT     ON TABLES    TO dbrole_readonly
      - GRANT SELECT     ON SEQUENCES TO dbrole_readonly
      - GRANT EXECUTE    ON FUNCTIONS TO dbrole_readonly
      - GRANT USAGE      ON SCHEMAS   TO dbrole_offline
      - GRANT SELECT     ON TABLES    TO dbrole_offline
      - GRANT SELECT     ON SEQUENCES TO dbrole_offline
      - GRANT EXECUTE    ON FUNCTIONS TO dbrole_offline
      - GRANT INSERT     ON TABLES    TO dbrole_readwrite
      - GRANT UPDATE     ON TABLES    TO dbrole_readwrite
      - GRANT DELETE     ON TABLES    TO dbrole_readwrite
      - GRANT USAGE      ON SEQUENCES TO dbrole_readwrite
      - GRANT UPDATE     ON SEQUENCES TO dbrole_readwrite
      - GRANT TRUNCATE   ON TABLES    TO dbrole_admin
      - GRANT REFERENCES ON TABLES    TO dbrole_admin
      - GRANT TRIGGER    ON TABLES    TO dbrole_admin
      - GRANT CREATE     ON SCHEMAS   TO dbrole_admin
    pg_default_schemas: [ monitor ]   # default schemas to be created
    pg_default_extensions:            # default extensions to be created
      - { name: pg_stat_statements ,schema: monitor }
      - { name: pgstattuple        ,schema: monitor }
      - { name: pg_buffercache     ,schema: monitor }
      - { name: pageinspect        ,schema: monitor }
      - { name: pg_prewarm         ,schema: monitor }
      - { name: pg_visibility      ,schema: monitor }
      - { name: pg_freespacemap    ,schema: monitor }
      - { name: postgres_fdw       ,schema: public  }
      - { name: file_fdw           ,schema: public  }
      - { name: btree_gist         ,schema: public  }
      - { name: btree_gin          ,schema: public  }
      - { name: pg_trgm            ,schema: public  }
      - { name: intagg             ,schema: public  }
      - { name: intarray           ,schema: public  }
      - { name: pg_repack }
    pg_reload: true                   # reload postgres after hba changes
    pg_default_hba_rules:             # postgres default host-based authentication rules, order by `order`
      - {user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'  ,order: 100}
      - {user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident' ,order: 150}
      - {user: '${repl}'    ,db: replication ,addr: localhost ,auth: pwd   ,title: 'replicator replication from localhost',order: 200}
      - {user: '${repl}'    ,db: replication ,addr: intra     ,auth: pwd   ,title: 'replicator replication from intranet' ,order: 250}
      - {user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: pwd   ,title: 'replicator postgres db from intranet' ,order: 300}
      - {user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password' ,order: 350}
      - {user: '${monitor}' ,db: all         ,addr: infra     ,auth: pwd   ,title: 'monitor from infra host with password',order: 400}
      - {user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'   ,order: 450}
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: ssl   ,title: 'admin @ everywhere with ssl & pwd'    ,order: 500}
      - {user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: pwd   ,title: 'pgbouncer read/write via local socket',order: 550}
      - {user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: pwd   ,title: 'read/write biz user via password'     ,order: 600}
      - {user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: pwd   ,title: 'allow etl offline tasks from intranet',order: 650}
    pgb_default_hba_rules:            # pgbouncer default host-based authentication rules, order by `order`
      - {user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident',order: 100}
      - {user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd' ,order: 150}
      - {user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: pwd   ,title: 'monitor access via intranet with pwd' ,order: 200}
      - {user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr' ,order: 250}
      - {user: '${admin}'   ,db: all         ,addr: intra     ,auth: pwd   ,title: 'admin access via intranet with pwd'   ,order: 300}
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'   ,order: 350}
      - {user: 'all'        ,db: all         ,addr: intra     ,auth: pwd   ,title: 'allow all user intra access with pwd' ,order: 400}

    #-----------------------------------------------------------------
    # PG_BACKUP
    #-----------------------------------------------------------------
    pgbackrest_enabled: true          # enable pgbackrest on pgsql host?
    pgbackrest_log_dir: /pg/log/pgbackrest # pgbackrest log dir, `/pg/log/pgbackrest` by default
    pgbackrest_method: local          # pgbackrest repo method: local,minio,[user-defined...]
    pgbackrest_init_backup: true      # take a full backup after pgbackrest is initialized?
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, ignored by minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backups for the last 14 days
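    # to target the minio repo above instead of the local fs, set (illustrative): pgbackrest_method: minio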

    #-----------------------------------------------------------------
    # PG_ACCESS
    #-----------------------------------------------------------------
    pgbouncer_enabled: true           # if disabled, pgbouncer will not be launched on pgsql host
    pgbouncer_port: 6432              # pgbouncer listen port, 6432 by default
    pgbouncer_log_dir: /pg/log/pgbouncer  # pgbouncer log dir, `/pg/log/pgbouncer` by default
    pgbouncer_auth_query: false       # query postgres to retrieve unlisted business users?
    pgbouncer_poolmode: transaction   # pooling mode: transaction,session,statement, transaction by default
    pgbouncer_sslmode: disable        # pgbouncer client ssl mode, disable by default
    pgbouncer_ignore_param: [ extra_float_digits, application_name, TimeZone, DateStyle, IntervalStyle, search_path ]
    pg_weight: 100          #INSTANCE # relative load balance weight in service, 100 by default, 0-255
    pg_service_provider: ''           # dedicated haproxy node group name, or empty string for local nodes by default
    pg_default_service_dest: pgbouncer # default service destination if svc.dest='default'
    pg_default_services:              # postgres default service definitions
      - { name: primary ,port: 5433 ,dest: default  ,check: /primary   ,selector: "[]" }
      - { name: replica ,port: 5434 ,dest: default  ,check: /read-only ,selector: "[]" , backup: "[? pg_role == `primary` || pg_role == `offline` ]" }
      - { name: default ,port: 5436 ,dest: postgres ,check: /primary   ,selector: "[]" }
      - { name: offline ,port: 5438 ,dest: postgres ,check: /replica   ,selector: "[? pg_role == `offline` || pg_offline_query ]" , backup: "[? pg_role == `replica` && !pg_offline_query]"}
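    # e.g. (illustrative, assuming the example pg-meta cluster with its default credentials):
    #   psql postgres://dbuser_meta:[email protected]:5433/meta   # read-write service via haproxy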
    pg_vip_enabled: false             # enable a l2 vip for pgsql primary? false by default
    pg_vip_address: 127.0.0.1/24      # vip address in `<ipv4>/<mask>` format, require if vip is enabled
    pg_vip_interface: eth0            # vip network interface to listen, eth0 by default
    pg_dns_suffix: ''                 # pgsql dns suffix, '' by default
    pg_dns_target: auto               # auto, primary, vip, none, or ad hoc ip

    #-----------------------------------------------------------------
    # PG_MONITOR
    #-----------------------------------------------------------------
    pg_exporter_enabled: true              # enable pg_exporter on pgsql hosts?
    pg_exporter_config: pg_exporter.yml    # pg_exporter configuration file name
    pg_exporter_cache_ttls: '1,10,60,300'  # pg_exporter collector ttl stage in seconds, '1,10,60,300' by default
    pg_exporter_port: 9630                 # pg_exporter listen port, 9630 by default
    pg_exporter_params: 'sslmode=disable'  # extra url parameters for pg_exporter dsn
    pg_exporter_url: ''                    # overwrite auto-generate pg dsn if specified
    pg_exporter_auto_discovery: true       # enable auto database discovery? enabled by default
    pg_exporter_exclude_database: 'template0,template1,postgres' # csv of database that WILL NOT be monitored during auto-discovery
    pg_exporter_include_database: ''       # csv of database that WILL BE monitored during auto-discovery
    pg_exporter_connect_timeout: 200       # pg_exporter connect timeout in ms, 200 by default
    pg_exporter_options: ''                # overwrite extra options for pg_exporter
    pgbouncer_exporter_enabled: true       # enable pgbouncer_exporter on pgsql hosts?
    pgbouncer_exporter_port: 9631          # pgbouncer_exporter listen port, 9631 by default
    pgbouncer_exporter_url: ''             # overwrite auto-generate pgbouncer dsn if specified
    pgbouncer_exporter_options: ''         # overwrite extra options for pgbouncer_exporter
    pgbackrest_exporter_enabled: true      # enable pgbackrest_exporter on pgsql hosts?
    pgbackrest_exporter_port: 9854         # pgbackrest_exporter listen port, 9854 by default
    pgbackrest_exporter_options: >
      --collect.interval=120
      --log.level=info

    #-----------------------------------------------------------------
    # PG_REMOVE
    #-----------------------------------------------------------------
    pg_safeguard: false               # stop pg_remove running if pg_safeguard is enabled, false by default
    pg_rm_data: true                  # remove postgres data during remove? true by default
    pg_rm_backup: true                # remove pgbackrest backup during primary remove? true by default
    pg_rm_pkg: true                   # uninstall postgres packages during remove? true by default

...

Configuration Notes

The demo/el template is a configuration tuned for the Enterprise Linux family of distributions.

Supported distributions

  • RHEL 8/9/10
  • Rocky Linux 8/9/10
  • AlmaLinux 8/9/10
  • Oracle Linux 8/9

Key features

  • Uses the EPEL and PGDG upstream repositories
  • Tuned for the YUM/DNF package managers
  • Supports EL-specific package names

Typical use cases

  • Enterprise production environments (RHEL/Rocky/Alma recommended)
  • Deployments requiring long-term support and stability guarantees
  • Environments built on the Red Hat ecosystem
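
To enable it (a usage sketch following the same pattern as the other templates):

./configure -c demo/el [-i <primary_ip>]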

32 - demo/debian

A dedicated configuration template for Debian/Ubuntu

The demo/debian configuration template is optimized for the Debian and Ubuntu distributions.


Configuration Overview

  • Config name: demo/debian
  • Node count: 4 nodes (sandbox demo)
  • Description: dedicated configuration template for Debian/Ubuntu
  • Supported systems: d12, d13, u22, u24
  • Supported architectures: x86_64, aarch64
  • Related configs: meta, demo/el

To enable:

./configure -c demo/debian [-i <primary_ip>]

Configuration Content

Source file: pigsty/conf/demo/debian.yml

---
#==============================================================#
# File      :   debian.yml
# Desc      :   Default parameters for Debian/Ubuntu in Pigsty
# Ctime     :   2020-05-22
# Mtime     :   2025-12-27
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


#==============================================================#
#                        Sandbox (4-node)                      #
#==============================================================#
# admin user : vagrant  (nopass ssh & sudo already set)        #
# 1.  meta    :    10.10.10.10     (2 Core | 4GB)    pg-meta   #
# 2.  node-1  :    10.10.10.11     (1 Core | 1GB)    pg-test-1 #
# 3.  node-2  :    10.10.10.12     (1 Core | 1GB)    pg-test-2 #
# 4.  node-3  :    10.10.10.13     (1 Core | 1GB)    pg-test-3 #
# (replace these ip if your 4-node env have different ip addr) #
# 2 VIPs: (l2 vip is available inside same LAN)                #
#     pg-meta --->  10.10.10.2 ---> 10.10.10.10                #
#     pg-test --->  10.10.10.3 ---> 10.10.10.1{1,2,3}          #
#==============================================================#


all:

  ##################################################################
  #                            CLUSTERS                            #
  ##################################################################
  # infra, etcd, minio, pgsql, and redis clusters are defined as
  # k:v pairs inside `all.children`, where the key is the cluster name
  # and the value is the cluster definition, consisting of two parts:
  # `hosts`: cluster members' ip and instance level variables
  # `vars` : cluster level variables
  ##################################################################
  children:                                 # groups definition

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    #----------------------------------#
    # pgsql cluster: pg-meta (CMDB)    #
    #----------------------------------#
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary , pg_offline_query: true } }
      vars:
        pg_cluster: pg-meta

        # define business databases here: https://doc.pgsty.com/pgsql/db
        pg_databases:                       # define business databases on this cluster, array of database definition
          - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
            #state: create                  # optional, create|absent|recreate, create by default
            baseline: cmdb.sql              # optional, database sql baseline path, (relative path among ansible search path, e.g: files/)
            schemas: [pigsty]               # optional, additional schemas to be created, array of schema names
            extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
              - { name: vector }            # install pgvector extension on this database by default
            comment: pigsty meta database   # optional, comment string for this database
            #pgbouncer: true                # optional, add this database to pgbouncer database list? true by default
            #owner: postgres                # optional, database owner, current user if not specified
            #template: template1            # optional, which template to use, template1 by default
            #strategy: FILE_COPY            # optional, clone strategy: FILE_COPY or WAL_LOG (PG15+), default to PG's default
            #encoding: UTF8                 # optional, inherited from template / cluster if not defined (UTF8)
            #locale: C                      # optional, inherited from template / cluster if not defined (C)
            #lc_collate: C                  # optional, inherited from template / cluster if not defined (C)
            #lc_ctype: C                    # optional, inherited from template / cluster if not defined (C)
            #locale_provider: libc          # optional, locale provider: libc, icu, builtin (PG15+)
            #icu_locale: en-US              # optional, icu locale for icu locale provider (PG15+)
            #icu_rules: ''                  # optional, icu rules for icu locale provider (PG16+)
            #builtin_locale: C.UTF-8        # optional, builtin locale for builtin locale provider (PG17+)
            #tablespace: pg_default         # optional, default tablespace, pg_default by default
            #is_template: false             # optional, mark database as template, allowing clone by any user with CREATEDB privilege
            #allowconn: true                # optional, allow connection, true by default. false will disable connect at all
            #revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
            #register_datasource: true      # optional, register this database to grafana datasources? true by default
            #connlimit: -1                  # optional, database connection limit, default -1 disable limit
            #pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
            #pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
            #pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
            #pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
            #pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
            #pool_max_db_conn: 100          # optional, max database connections at database level, default 100
          #- { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }
          #- { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          #- { name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong the api gateway database }
          #- { name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          #- { name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database }

        # define business users here: https://doc.pgsty.com/pgsql/user
        pg_users:                           # define business users/roles on this cluster, array of user definition
          - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
            password: DBUser.Meta           # optional, password, can be a scram-sha-256 hash string or plain text
            #login: true                     # optional, can log in, true by default  (new biz ROLE should be false)
            #superuser: false                # optional, is superuser? false by default
            #createdb: false                 # optional, can create database? false by default
            #createrole: false               # optional, can create role? false by default
            #inherit: true                   # optional, can this role use inherited privileges? true by default
            #replication: false              # optional, can this role do replication? false by default
            #bypassrls: false                # optional, can this role bypass row level security? false by default
            #pgbouncer: true                 # optional, add this user to pgbouncer user-list? false by default (production user should be true explicitly)
            #connlimit: -1                   # optional, user connection limit, default -1 disable limit
            #expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
            #expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired  (OVERWRITTEN by expire_in)
            #comment: pigsty admin user      # optional, comment string for this user/role
            #roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin,readonly,readwrite,offline}
            #parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
            #pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
            #pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
          - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database}
          #- {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database   }
          #- {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database  }
          #- {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service      }
          #- {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service    }

        # define business service here: https://doc.pgsty.com/pgsql/service
        pg_services:                        # extra services in addition to pg_default_services, array of service definition
          # standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
          - name: standby                   # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
            port: 5435                      # required, service exposed port (work as kubernetes service node port mode)
            ip: "*"                         # optional, service bind ip address, `*` for all ip by default
            selector: "[]"                  # required, service member selector, use JMESPath to filter inventory
            dest: default                   # optional, destination port, default|postgres|pgbouncer|<port_number>, 'default' by default
            check: /sync                    # optional, health check url path, / by default
            backup: "[? pg_role == `primary`]"  # backup server selector
            maxconn: 3000                   # optional, max allowed front-end connection
            balance: roundrobin             # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
            options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'

        # define pg extensions: https://doc.pgsty.com/pgsql/extension
        pg_libs: 'pg_stat_statements, auto_explain' # preload pg_stat_statements & auto_explain; add timescaledb here if needed
        #pg_extensions: [] # extensions to be installed on this cluster

        # define HBA rules here: https://doc.pgsty.com/pgsql/hba
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}

        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

        node_crontab:  # make a full backup 1 am everyday
          - '00 01 * * * postgres /pg/bin/pg-backup full'

    #----------------------------------#
    # pgsql cluster: pg-test (3 nodes) #
    #----------------------------------#
    # pg-test --->  10.10.10.3 ---> 10.10.10.1{1,2,3}
    pg-test:                          # define the new 3-node cluster pg-test
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
        10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
        10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
      vars:
        pg_cluster: pg-test           # define pgsql cluster name
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: test }] # create a database and user named 'test'
        node_tune: tiny
        pg_conf: tiny.yml
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1
        node_crontab:  # make a full backup on monday 1am, and an incremental backup during weekdays
          - '00 01 * * 1 postgres /pg/bin/pg-backup full'
          - '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup'

    #----------------------------------#
    # redis ms, sentinel, native cluster
    #----------------------------------#
    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    redis-meta: # redis sentinel x 3
      hosts: { 10.10.10.11: { redis_node: 1 , redis_instances: { 26379: { } ,26380: { } ,26381: { } } } }
      vars:
        redis_cluster: redis-meta
        redis_password: 'redis.meta'
        redis_mode: sentinel
        redis_max_memory: 16MB
        redis_sentinel_monitor: # primary list for redis sentinel, use cls as name, primary ip:port
          - { name: redis-ms, host: 10.10.10.10, port: 6379 ,password: redis.ms, quorum: 2 }

    redis-test: # redis native cluster: 3m x 3s
      hosts:
        10.10.10.12: { redis_node: 1 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
        10.10.10.13: { redis_node: 2 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
      vars: { redis_cluster: redis-test ,redis_password: 'redis.test' ,redis_mode: cluster, redis_max_memory: 32MB }


  ####################################################################
  #                             VARS                                 #
  ####################################################################
  vars:                               # global variables


    #================================================================#
    #                         VARS: INFRA                            #
    #================================================================#

    #-----------------------------------------------------------------
    # META
    #-----------------------------------------------------------------
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    language: en                      # default language: en, zh
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]

    #-----------------------------------------------------------------
    # CA
    #-----------------------------------------------------------------
    ca_create: true                   # create ca if not exists? or just abort
    ca_cn: pigsty-ca                  # ca common name, fixed as pigsty-ca
    cert_validity: 7300d              # cert validity, 20 years by default

    #-----------------------------------------------------------------
    # INFRA_IDENTITY
    #-----------------------------------------------------------------
    #infra_seq: 1                     # infra node identity, explicitly required
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name
    infra_data: /data/infra           # default data path for infrastructure data

    #-----------------------------------------------------------------
    # REPO
    #-----------------------------------------------------------------
    repo_enabled: true                # create a yum repo on this infra node?
    repo_home: /www                   # repo home dir, `/www` by default
    repo_name: pigsty                 # repo name, pigsty by default
    repo_endpoint: http://${admin_ip}:80 # access point to this repo by domain or ip:port
    repo_remove: true                 # remove existing upstream repo
    repo_modules: infra,node,pgsql    # which repo modules are installed in repo_upstream
    repo_upstream:                    # where to download
      - { name: pigsty-local   ,description: 'Pigsty Local'       ,module: local   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://${admin_ip}/pigsty ./' }}
      - { name: pigsty-pgsql   ,description: 'Pigsty PgSQL'       ,module: pgsql   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/pgsql/${distro_codename} ${distro_codename} main', china: 'https://repo.pigsty.cc/apt/pgsql/${distro_codename} ${distro_codename} main' }}
      - { name: pigsty-infra   ,description: 'Pigsty Infra'       ,module: infra   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/infra/ generic main' ,china: 'https://repo.pigsty.cc/apt/infra/ generic main' }}
      - { name: nginx          ,description: 'Nginx'              ,module: infra   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://nginx.org/packages/${distro_name} ${distro_codename} nginx' }}
      - { name: docker-ce      ,description: 'Docker'             ,module: infra   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.docker.com/linux/${distro_name} ${distro_codename} stable'                               ,china: 'https://mirrors.aliyun.com/docker-ce/linux/${distro_name} ${distro_codename} stable' }}
      - { name: base           ,description: 'Debian Basic'       ,module: node    ,releases: [11,12,13         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://deb.debian.org/debian/ ${distro_codename} main non-free-firmware'                                  ,china: 'https://mirrors.aliyun.com/debian/ ${distro_codename} main restricted universe multiverse' }}
      - { name: updates        ,description: 'Debian Updates'     ,module: node    ,releases: [11,12,13         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://deb.debian.org/debian/ ${distro_codename}-updates main non-free-firmware'                          ,china: 'https://mirrors.aliyun.com/debian/ ${distro_codename}-updates main restricted universe multiverse' }}
      - { name: security       ,description: 'Debian Security'    ,module: node    ,releases: [11,12,13         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://security.debian.org/debian-security ${distro_codename}-security main non-free-firmware'            ,china: 'https://mirrors.aliyun.com/debian-security/ ${distro_codename}-security main non-free-firmware' }}
      - { name: base           ,description: 'Ubuntu Basic'       ,module: node    ,releases: [         20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}           main universe multiverse restricted' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}           main restricted universe multiverse' }}
      - { name: updates        ,description: 'Ubuntu Updates'     ,module: node    ,releases: [         20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-updates   main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-updates   main restricted universe multiverse' }}
      - { name: backports      ,description: 'Ubuntu Backports'   ,module: node    ,releases: [         20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-backports main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-backports main restricted universe multiverse' }}
      - { name: security       ,description: 'Ubuntu Security'    ,module: node    ,releases: [         20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-security  main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-security  main restricted universe multiverse' }}
      - { name: base           ,description: 'Ubuntu Basic'       ,module: node    ,releases: [         20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}             main universe multiverse restricted' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}           main restricted universe multiverse' }}
      - { name: updates        ,description: 'Ubuntu Updates'     ,module: node    ,releases: [         20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-updates     main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-updates   main restricted universe multiverse' }}
      - { name: backports      ,description: 'Ubuntu Backports'   ,module: node    ,releases: [         20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-backports   main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-backports main restricted universe multiverse' }}
      - { name: security       ,description: 'Ubuntu Security'    ,module: node    ,releases: [         20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-security    main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-security  main restricted universe multiverse' }}
      - { name: pgdg           ,description: 'PGDG'               ,module: pgsql   ,releases: [11,12,13,   22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://apt.postgresql.org/pub/repos/apt/ ${distro_codename}-pgdg main' ,china: 'https://mirrors.aliyun.com/postgresql/repos/apt/ ${distro_codename}-pgdg main' }}
      - { name: pgdg-beta      ,description: 'PGDG Beta'          ,module: beta    ,releases: [11,12,13,   22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://apt.postgresql.org/pub/repos/apt/ ${distro_codename}-pgdg-testing main 19' ,china: 'https://mirrors.aliyun.com/postgresql/repos/apt/ ${distro_codename}-pgdg-testing main 19' }}
      - { name: timescaledb    ,description: 'TimescaleDB'        ,module: extra   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/timescale/timescaledb/${distro_name}/ ${distro_codename} main' }}
      - { name: citus          ,description: 'Citus'              ,module: extra   ,releases: [11,12,   20,22   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/citusdata/community/${distro_name}/ ${distro_codename} main' } }
      - { name: percona        ,description: 'Percona TDE'        ,module: percona ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/percona ${distro_codename} main' ,china: 'https://repo.pigsty.cc/apt/percona ${distro_codename} main' ,origin: 'http://repo.percona.com/ppg-18.1/apt ${distro_codename} main' }}
      - { name: wiltondb       ,description: 'WiltonDB'           ,module: mssql   ,releases: [         20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/mssql/ ${distro_codename} main'  ,china: 'https://repo.pigsty.cc/apt/mssql/ ${distro_codename} main'  ,origin: 'https://ppa.launchpadcontent.net/wiltondb/wiltondb/ubuntu/ ${distro_codename} main'  }}
      - { name: groonga        ,description: 'Groonga Debian'     ,module: groonga ,releases: [11,12,13         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.groonga.org/debian/ ${distro_codename} main' }}
      - { name: groonga        ,description: 'Groonga Ubuntu'     ,module: groonga ,releases: [         20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://ppa.launchpadcontent.net/groonga/ppa/ubuntu/ ${distro_codename} main' }}
      - { name: mysql          ,description: 'MySQL'              ,module: mysql   ,releases: [11,12,   20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mysql.com/apt/${distro_name} ${distro_codename} mysql-8.0 mysql-tools', china: 'https://mirrors.tuna.tsinghua.edu.cn/mysql/apt/${distro_name} ${distro_codename} mysql-8.0 mysql-tools' }}
      - { name: mongo          ,description: 'MongoDB'            ,module: mongo   ,releases: [11,12,   20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mongodb.org/apt/${distro_name} ${distro_codename}/mongodb-org/8.0 multiverse', china: 'https://mirrors.aliyun.com/mongodb/apt/${distro_name} ${distro_codename}/mongodb-org/8.0 multiverse' }}
      - { name: redis          ,description: 'Redis'              ,module: redis   ,releases: [11,12,   20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.redis.io/deb ${distro_codename} main' }}
      - { name: llvm           ,description: 'LLVM'               ,module: llvm    ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://apt.llvm.org/${distro_codename}/ llvm-toolchain-${distro_codename} main' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/llvm-apt/${distro_codename}/ llvm-toolchain-${distro_codename} main' }}
      - { name: haproxyd       ,description: 'Haproxy Debian'     ,module: haproxy ,releases: [11,12            ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://haproxy.debian.net/ ${distro_codename}-backports-3.1 main' }}
      - { name: haproxyu       ,description: 'Haproxy Ubuntu'     ,module: haproxy ,releases: [         20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://ppa.launchpadcontent.net/vbernat/haproxy-3.1/ubuntu/ ${distro_codename} main' }}
      - { name: grafana        ,description: 'Grafana'            ,module: grafana ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://apt.grafana.com stable main' ,china: 'https://mirrors.aliyun.com/grafana/apt/ stable main' }}
      - { name: kubernetes     ,description: 'Kubernetes'         ,module: kube    ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /', china: 'https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.33/deb/ /' }}
      - { name: gitlab-ee      ,description: 'Gitlab EE'          ,module: gitlab  ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.gitlab.com/gitlab/gitlab-ee/${distro_name}/ ${distro_codename} main' }}
      - { name: gitlab-ce      ,description: 'Gitlab CE'          ,module: gitlab  ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.gitlab.com/gitlab/gitlab-ce/${distro_name}/ ${distro_codename} main' }}
      - { name: clickhouse     ,description: 'ClickHouse'         ,module: click   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.clickhouse.com/deb/ stable main', china: 'https://mirrors.aliyun.com/clickhouse/deb/ stable main' }}

    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules ]
    repo_extra_packages: [ pgsql-main ]
    repo_url_packages: []

    #-----------------------------------------------------------------
    # INFRA_PACKAGE
    #-----------------------------------------------------------------
    infra_packages:                   # packages to be installed on infra nodes
      - grafana,grafana-plugins,grafana-victorialogs-ds,grafana-victoriametrics-ds,victoria-metrics,victoria-logs,victoria-traces,vmutils,vlogscli,alertmanager
      - node-exporter,blackbox-exporter,nginx-exporter,pg-exporter,pev2,nginx,dnsmasq,ansible,etcd,python3-requests,redis,mcli,restic,certbot,python3-certbot-nginx
    infra_packages_pip: ''            # pip installed packages for infra nodes

    #-----------------------------------------------------------------
    # NGINX
    #-----------------------------------------------------------------
    nginx_enabled: true               # enable nginx on this infra node?
    nginx_clean: false                # clean existing nginx config during init?
    nginx_exporter_enabled: true      # enable nginx_exporter on this infra node?
    nginx_exporter_port: 9113         # nginx_exporter listen port, 9113 by default
    nginx_sslmode: enable             # nginx ssl mode? disable,enable,enforce
    nginx_cert_validity: 397d         # nginx self-signed cert validity, 397d by default
    nginx_home: /www                  # nginx content dir, `/www` by default (soft link to nginx_data)
    nginx_data: /data/nginx           # nginx actual data dir, /data/nginx by default
    nginx_users: { admin : pigsty }   # nginx basic auth users: name and pass dict
    nginx_port: 80                    # nginx listen port, 80 by default
    nginx_ssl_port: 443               # nginx ssl listen port, 443 by default
    certbot_sign: false               # sign nginx cert with certbot during setup?
    certbot_email: [email protected]     # certbot email address, used for free ssl
    certbot_options: ''               # certbot extra options

    #-----------------------------------------------------------------
    # DNS
    #-----------------------------------------------------------------
    dns_enabled: true                 # setup dnsmasq on this infra node?
    dns_port: 53                      # dns server listen port, 53 by default
    dns_records:                      # dynamic dns records resolved by dnsmasq
      - "${admin_ip} i.pigsty"
      - "${admin_ip} m.pigsty supa.pigsty api.pigsty adm.pigsty cli.pigsty ddl.pigsty"

    #-----------------------------------------------------------------
    # VICTORIA
    #-----------------------------------------------------------------
    vmetrics_enabled: true            # enable victoria-metrics on this infra node?
    vmetrics_clean: false             # clean existing victoria-metrics data during init?
    vmetrics_port: 8428               # victoria-metrics listen port, 8428 by default
    vmetrics_scrape_interval: 10s     # victoria global scrape interval, 10s by default
    vmetrics_scrape_timeout: 8s       # victoria global scrape timeout, 8s by default
    vmetrics_options: >-
      -retentionPeriod=15d
      -promscrape.fileSDCheckInterval=5s
    vlogs_enabled: true               # enable victoria-logs on this infra node?
    vlogs_clean: false                # clean victoria-logs data during init?
    vlogs_port: 9428                  # victoria-logs listen port, 9428 by default
    vlogs_options: >-
      -retentionPeriod=15d
      -retention.maxDiskSpaceUsageBytes=50GiB
      -insert.maxLineSizeBytes=1MB
      -search.maxQueryDuration=120s
    vtraces_enabled: true             # enable victoria-traces on this infra node?
    vtraces_clean: false              # clean victoria-traces data during init?
    vtraces_port: 10428               # victoria-traces listen port, 10428 by default
    vtraces_options: >-
      -retentionPeriod=15d
      -retention.maxDiskSpaceUsageBytes=50GiB
    vmalert_enabled: true             # enable vmalert on this infra node?
    vmalert_port: 8880                # vmalert listen port, 8880 by default
    vmalert_options: ''              # vmalert extra server options

    #-----------------------------------------------------------------
    # PROMETHEUS
    #-----------------------------------------------------------------
    blackbox_enabled: true            # setup blackbox_exporter on this infra node?
    blackbox_port: 9115               # blackbox_exporter listen port, 9115 by default
    blackbox_options: ''              # blackbox_exporter extra server options
    alertmanager_enabled: true        # setup alertmanager on this infra node?
    alertmanager_port: 9059           # alertmanager listen port, 9059 by default
    alertmanager_options: ''          # alertmanager extra server options
    exporter_metrics_path: /metrics   # exporter metric path, `/metrics` by default

    #-----------------------------------------------------------------
    # GRAFANA
    #-----------------------------------------------------------------
    grafana_enabled: true             # enable grafana on this infra node?
    grafana_port: 3000                # default listen port for grafana
    grafana_clean: false              # clean grafana data during init?
    grafana_admin_username: admin     # grafana admin username, `admin` by default
    grafana_admin_password: pigsty    # grafana admin password, `pigsty` by default
    grafana_auth_proxy: false         # enable grafana auth proxy?
    grafana_pgurl: ''                 # external postgres database url for grafana if given
    grafana_view_password: DBUser.Viewer # password for grafana meta pg datasource


    #================================================================#
    #                         VARS: NODE                             #
    #================================================================#

    #-----------------------------------------------------------------
    # NODE_IDENTITY
    #-----------------------------------------------------------------
    #nodename:           # [INSTANCE] # node instance identity, use hostname if missing, optional
    node_cluster: nodes   # [CLUSTER] # node cluster identity, use 'nodes' if missing, optional
    nodename_overwrite: true          # overwrite node's hostname with nodename?
    nodename_exchange: false          # exchange nodename among play hosts?
    node_id_from_pg: true             # use postgres identity as node identity if applicable?

    #-----------------------------------------------------------------
    # NODE_DNS
    #-----------------------------------------------------------------
    node_write_etc_hosts: true        # modify `/etc/hosts` on target node?
    node_default_etc_hosts:           # static dns records in `/etc/hosts`
      - "${admin_ip} i.pigsty"
    node_etc_hosts: []                # extra static dns records in `/etc/hosts`
    node_dns_method: add              # how to handle dns servers: add,none,overwrite
    node_dns_servers: ['${admin_ip}'] # dynamic nameserver in `/etc/resolv.conf`
    node_dns_options:                 # dns resolv options in `/etc/resolv.conf`
      - options single-request-reopen timeout:1

    #-----------------------------------------------------------------
    # NODE_PACKAGE
    #-----------------------------------------------------------------
    node_repo_modules: local          # upstream repo to be added on node, local by default
    node_repo_remove: true            # remove existing repo on node?
    node_packages: [openssh-server]   # packages to be installed on current nodes with the latest version
    node_default_packages:            # default packages to be installed on all nodes
      - lz4,unzip,bzip2,pv,jq,git,ncdu,make,patch,bash,lsof,wget,uuid,tuned,nvme-cli,numactl,sysstat,iotop,htop,rsync,tcpdump
      - python3,python3-pip,socat,lrzsz,net-tools,ipvsadm,telnet,ca-certificates,openssl,keepalived,etcd,haproxy,chrony,pig
      - zlib1g,acl,dnsutils,libreadline-dev,vim-tiny,node-exporter,openssh-server,openssh-client,vector

    #-----------------------------------------------------------------
    # NODE_SEC
    #-----------------------------------------------------------------
    node_selinux_mode: permissive     # set selinux mode: enforcing,permissive,disabled
    node_firewall_mode: zone          # firewall mode: off,none,zone; zone by default
    node_firewall_intranet:           # which intranet cidr considered as internal network
      - 10.0.0.0/8
      - 192.168.0.0/16
      - 172.16.0.0/12
    node_firewall_public_port:        # expose these ports to public network in (zone, strict) mode
      - 22                            # enable ssh access
      - 80                            # enable http access
      - 443                           # enable https access
      - 5432                          # enable postgresql access (think twice before exposing it!)

    #-----------------------------------------------------------------
    # NODE_TUNE
    #-----------------------------------------------------------------
    node_disable_numa: false          # disable node numa, reboot required
    node_disable_swap: false          # disable node swap, use with caution
    node_static_network: true         # preserve dns resolver settings after reboot
    node_disk_prefetch: false         # setup disk prefetch on HDD to increase performance
    node_kernel_modules: [ softdog, ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh ]
    node_hugepage_count: 0            # number of 2MB hugepage, take precedence over ratio
    node_hugepage_ratio: 0            # node mem hugepage ratio, 0 disables it by default
    node_overcommit_ratio: 0          # node mem overcommit ratio, 0 disables it by default
    node_tune: oltp                   # node tuned profile: none,oltp,olap,crit,tiny
    node_sysctl_params: { }           # sysctl parameters in k:v format in addition to tuned

    #-----------------------------------------------------------------
    # NODE_ADMIN
    #-----------------------------------------------------------------
    node_data: /data                  # node main data directory, `/data` by default
    node_admin_enabled: true          # create an admin user on target node?
    node_admin_uid: 88                # uid and gid for node admin user
    node_admin_username: dba          # name of node admin user, `dba` by default
    node_admin_sudo: nopass           # admin sudo privilege, all,nopass. nopass by default
    node_admin_ssh_exchange: true     # exchange admin ssh key among node cluster
    node_admin_pk_current: true       # add current user's ssh pk to admin authorized_keys
    node_admin_pk_list: []            # ssh public keys to be added to admin user
    node_aliases: {}                  # extra shell aliases to be added, k:v dict

    #-----------------------------------------------------------------
    # NODE_TIME
    #-----------------------------------------------------------------
    node_timezone: ''                 # setup node timezone, empty string to skip
    node_ntp_enabled: true            # enable chronyd time sync service?
    node_ntp_servers:                 # ntp servers in `/etc/chrony.conf`
      - pool pool.ntp.org iburst
    node_crontab_overwrite: true      # overwrite or append to `/etc/crontab`?
    node_crontab: [ ]                 # crontab entries in `/etc/crontab`

    #-----------------------------------------------------------------
    # NODE_VIP
    #-----------------------------------------------------------------
    vip_enabled: false                # enable vip on this node cluster?
    # vip_address:         [IDENTITY] # node vip address in ipv4 format, required if vip is enabled
    # vip_vrid:            [IDENTITY] # required, integer, 1-254, should be unique among same VLAN
    vip_role: backup                  # optional, `master|backup`, backup by default, use as init role
    vip_preempt: false                # optional, `true/false`, false by default, enable vip preemption
    vip_interface: eth0               # node vip network interface to listen, `eth0` by default
    vip_dns_suffix: ''                # node vip dns name suffix, empty string by default
    vip_exporter_port: 9650           # keepalived exporter listen port, 9650 by default

    #-----------------------------------------------------------------
    # HAPROXY
    #-----------------------------------------------------------------
    haproxy_enabled: true             # enable haproxy on this node?
    haproxy_clean: false              # cleanup all existing haproxy config?
    haproxy_reload: true              # reload haproxy after config?
    haproxy_auth_enabled: true        # enable authentication for haproxy admin page
    haproxy_admin_username: admin     # haproxy admin username, `admin` by default
    haproxy_admin_password: pigsty    # haproxy admin password, `pigsty` by default
    haproxy_exporter_port: 9101       # haproxy admin/exporter port, 9101 by default
    haproxy_client_timeout: 24h       # client side connection timeout, 24h by default
    haproxy_server_timeout: 24h       # server side connection timeout, 24h by default
    haproxy_services: []              # list of haproxy service to be exposed on node

    #-----------------------------------------------------------------
    # NODE_EXPORTER
    #-----------------------------------------------------------------
    node_exporter_enabled: true       # setup node_exporter on this node?
    node_exporter_port: 9100          # node exporter listen port, 9100 by default
    node_exporter_options: '--no-collector.softnet --no-collector.nvme --collector.tcpstat --collector.processes'

    #-----------------------------------------------------------------
    # VECTOR
    #-----------------------------------------------------------------
    vector_enabled: true              # enable vector log collector?
    vector_clean: false               # purge vector data dir during init?
    vector_data: /data/vector         # vector data dir, /data/vector by default
    vector_port: 9598                 # vector metrics port, 9598 by default
    vector_read_from: beginning       # vector read from beginning or end
    vector_log_endpoint: [ infra ]    # if defined, send vector logs to this endpoint


    #================================================================#
    #                        VARS: DOCKER                            #
    #================================================================#
    docker_enabled: false             # enable docker on this node?
    docker_data: /data/docker         # docker data directory, /data/docker by default
    docker_storage_driver: overlay2   # docker storage driver, can be zfs, btrfs
    docker_cgroups_driver: systemd    # docker cgroup fs driver: cgroupfs,systemd
    docker_registry_mirrors: []       # docker registry mirror list
    docker_exporter_port: 9323        # docker metrics exporter port, 9323 by default
    docker_image: []                  # docker image to be pulled after bootstrap
    docker_image_cache: /tmp/docker/*.tgz # docker image cache glob pattern

    #================================================================#
    #                         VARS: ETCD                             #
    #================================================================#
    #etcd_seq: 1                      # etcd instance identifier, explicitly required
    etcd_cluster: etcd                # etcd cluster & group name, etcd by default
    etcd_safeguard: false             # prevent purging running etcd instance?
    etcd_clean: true                  # purge existing etcd during initialization?
    etcd_data: /data/etcd             # etcd data directory, /data/etcd by default
    etcd_port: 2379                   # etcd client port, 2379 by default
    etcd_peer_port: 2380              # etcd peer port, 2380 by default
    etcd_init: new                    # etcd initial cluster state, new or existing
    etcd_election_timeout: 1000       # etcd election timeout, 1000ms by default
    etcd_heartbeat_interval: 100      # etcd heartbeat interval, 100ms by default
    etcd_root_password: Etcd.Root     # etcd root password for RBAC, change it!


    #================================================================#
    #                         VARS: MINIO                            #
    #================================================================#
    #minio_seq: 1                     # minio instance identifier, REQUIRED
    minio_cluster: minio              # minio cluster identifier, REQUIRED
    minio_clean: false                # cleanup minio during init?, false by default
    minio_user: minio                 # minio os user, `minio` by default
    minio_https: true                 # use https for minio, true by default
    minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio node name pattern
    minio_data: '/data/minio'         # minio data dir(s), use {x...y} to specify multiple drives
    #minio_volumes:                   # minio data volumes, override defaults if specified
    minio_domain: sss.pigsty          # minio external domain name, `sss.pigsty` by default
    minio_port: 9000                  # minio service port, 9000 by default
    minio_admin_port: 9001            # minio console port, 9001 by default
    minio_access_key: minioadmin      # root access key, `minioadmin` by default
    minio_secret_key: S3User.MinIO    # root secret key, `S3User.MinIO` by default
    minio_extra_vars: ''              # extra environment variables
    minio_provision: true             # run minio provisioning tasks?
    minio_alias: sss                  # alias name for local minio deployment
    #minio_endpoint: https://sss.pigsty:9000 # if not specified, overwritten by defaults
    minio_buckets:                    # list of minio bucket to be created
      - { name: pgsql }
      - { name: meta ,versioning: true }
      - { name: data }
    minio_users:                      # list of minio user to be created
      - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
      - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
      - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }


    #================================================================#
    #                         VARS: REDIS                            #
    #================================================================#
    #redis_cluster:        <CLUSTER> # redis cluster name, required identity parameter
    #redis_node: 1            <NODE> # redis node sequence number, node int id required
    #redis_instances: {}      <NODE> # redis instances definition on this redis node
    redis_fs_main: /data              # redis main data mountpoint, `/data` by default
    redis_exporter_enabled: true      # install redis exporter on redis nodes?
    redis_exporter_port: 9121         # redis exporter listen port, 9121 by default
    redis_exporter_options: ''        # cli args and extra options for redis exporter
    redis_safeguard: false            # prevent purging running redis instance?
    redis_clean: true                 # purge existing redis during init?
    redis_rmdata: true                # remove redis data when purging redis server?
    redis_mode: standalone            # redis mode: standalone,cluster,sentinel
    redis_conf: redis.conf            # redis config template path, except sentinel
    redis_bind_address: '0.0.0.0'     # redis bind address, empty string will use host ip
    redis_max_memory: 1GB             # max memory used by each redis instance
    redis_mem_policy: allkeys-lru     # redis memory eviction policy
    redis_password: ''                # redis password, empty string will disable password
    redis_rdb_save: ['1200 1']        # redis rdb save directives, disable with empty list
    redis_aof_enabled: false          # enable redis append only file?
    redis_rename_commands: {}         # rename redis dangerous commands
    redis_cluster_replicas: 1         # replica number for one master in redis cluster
    redis_sentinel_monitor: []        # sentinel master list, works on sentinel cluster only


    #================================================================#
    #                         VARS: PGSQL                            #
    #================================================================#

    #-----------------------------------------------------------------
    # PG_IDENTITY
    #-----------------------------------------------------------------
    pg_mode: pgsql          #CLUSTER  # pgsql cluster mode: pgsql,citus,gpsql,mssql,mysql,ivory,polar
    # pg_cluster:           #CLUSTER  # pgsql cluster name, required identity parameter
    # pg_seq: 0             #INSTANCE # pgsql instance seq number, required identity parameter
    # pg_role: replica      #INSTANCE # pgsql role, required, could be primary,replica,offline
    # pg_instances: {}      #INSTANCE # define multiple pg instances on node in `{port:ins_vars}` format
    # pg_upstream:          #INSTANCE # repl upstream ip addr for standby cluster or cascade replica
    # pg_shard:             #CLUSTER  # pgsql shard name, optional identity for sharding clusters
    # pg_group: 0           #CLUSTER  # pgsql shard index number, optional identity for sharding clusters
    # gp_role: master       #CLUSTER  # greenplum role of this cluster, could be master or segment
    pg_offline_query: false #INSTANCE # set to true to enable offline queries on this instance

    #-----------------------------------------------------------------
    # PG_BUSINESS
    #-----------------------------------------------------------------
    # postgres business object definition, overwrite in group vars
    pg_users: []                      # postgres business users
    pg_databases: []                  # postgres business databases
    pg_services: []                   # postgres business services
    pg_hba_rules: []                  # business hba rules for postgres
    pgb_hba_rules: []                 # business hba rules for pgbouncer
    # global credentials, overwrite in global vars
    pg_dbsu_password: ''              # dbsu password, empty string means no dbsu password by default
    pg_replication_username: replicator
    pg_replication_password: DBUser.Replicator
    pg_admin_username: dbuser_dba
    pg_admin_password: DBUser.DBA
    pg_monitor_username: dbuser_monitor
    pg_monitor_password: DBUser.Monitor

    #-----------------------------------------------------------------
    # PG_INSTALL
    #-----------------------------------------------------------------
    pg_dbsu: postgres                 # os dbsu name, postgres by default, better not change it
    pg_dbsu_uid: 543                  # os dbsu uid and gid, 26 for default postgres users and groups
    pg_dbsu_sudo: limit               # dbsu sudo privilege, none,limit,all,nopass. limit by default
    pg_dbsu_home: /var/lib/pgsql      # postgresql home directory, `/var/lib/pgsql` by default
    pg_dbsu_ssh_exchange: true        # exchange postgres dbsu ssh key among same pgsql cluster
    pg_version: 18                    # postgres major version to be installed, 18 by default
    pg_bin_dir: /usr/pgsql/bin        # postgres binary dir, `/usr/pgsql/bin` by default
    pg_log_dir: /pg/log/postgres      # postgres log dir, `/pg/log/postgres` by default
    pg_packages:                      # pg packages to be installed, alias can be used
      - pgsql-main pgsql-common
    pg_extensions: []                 # pg extensions to be installed, alias can be used

    #-----------------------------------------------------------------
    # PG_BOOTSTRAP
    #-----------------------------------------------------------------
    pg_data: /pg/data                 # postgres data directory, `/pg/data` by default
    pg_fs_main: /data/postgres        # postgres main data directory, `/data/postgres` by default
    pg_fs_backup: /data/backups       # postgres backup data directory, `/data/backups` by default
    pg_storage_type: SSD              # storage type for pg main data, SSD,HDD, SSD by default
    pg_dummy_filesize: 64MiB          # size of `/pg/dummy`, hold 64MB disk space for emergency use
    pg_listen: '0.0.0.0'              # postgres/pgbouncer listen addresses, comma separated list
    pg_port: 5432                     # postgres listen port, 5432 by default
    pg_localhost: /var/run/postgresql # postgres unix socket dir for localhost connection
    patroni_enabled: true             # if disabled, no postgres cluster will be created during init
    patroni_mode: default             # patroni working mode: default,pause,remove
    pg_namespace: /pg                 # top level key namespace in etcd, used by patroni & vip
    patroni_port: 8008                # patroni listen port, 8008 by default
    patroni_log_dir: /pg/log/patroni  # patroni log dir, `/pg/log/patroni` by default
    patroni_ssl_enabled: false        # secure patroni RestAPI communications with SSL?
    patroni_watchdog_mode: off        # patroni watchdog mode: automatic,required,off. off by default
    patroni_username: postgres        # patroni restapi username, `postgres` by default
    patroni_password: Patroni.API     # patroni restapi password, `Patroni.API` by default
    pg_etcd_password: ''              # etcd password for this pg cluster, '' to use pg_cluster
    pg_primary_db: postgres           # primary database name, used by citus,etc... ,postgres by default
    pg_parameters: {}                 # extra parameters in postgresql.auto.conf
    pg_files: []                      # extra files to be copied to postgres data directory (e.g. license)
    pg_conf: oltp.yml                 # config template: oltp,olap,crit,tiny. `oltp.yml` by default
    pg_max_conn: auto                 # postgres max connections, `auto` will use recommended value
    pg_shared_buffer_ratio: 0.25      # postgres shared buffers ratio, 0.25 by default, 0.1~0.4
    pg_io_method: worker              # io method for postgres, auto,fsync,worker,io_uring, worker by default
    pg_rto: 30                        # recovery time objective in seconds,  `30s` by default
    pg_rpo: 1048576                   # recovery point objective in bytes, `1MiB` at most by default
    pg_libs: 'pg_stat_statements, auto_explain'  # preloaded libraries, `pg_stat_statements,auto_explain` by default
    pg_delay: 0                       # replication apply delay for standby cluster leader
    pg_checksum: true                 # enable data checksum for postgres cluster?
    pg_encoding: UTF8                 # database cluster encoding, `UTF8` by default
    pg_locale: C                      # database cluster locale, `C` by default
    pg_lc_collate: C                  # database cluster collate, `C` by default
    pg_lc_ctype: C                    # database character type, `C` by default
    #pgsodium_key: ""                 # pgsodium key, 64 hex digits, default to sha256(pg_cluster)
    #pgsodium_getkey_script: ""       # pgsodium getkey script path, pgsodium_getkey by default

    #-----------------------------------------------------------------
    # PG_PROVISION
    #-----------------------------------------------------------------
    pg_provision: true                # provision postgres cluster after bootstrap
    pg_init: pg-init                  # provision init script for cluster template, `pg-init` by default
    pg_default_roles:                 # default roles and users in postgres cluster
      - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
      - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
      - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
      - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
      - { name: postgres     ,superuser: true  ,comment: system superuser }
      - { name: replicator ,replication: true  ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator }
      - { name: dbuser_dba   ,superuser: true  ,roles: [dbrole_admin]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 ,comment: pgsql admin user }
      - { name: dbuser_monitor ,roles: [pg_monitor] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
    pg_default_privileges:            # default privileges when created by admin user
      - GRANT USAGE      ON SCHEMAS   TO dbrole_readonly
      - GRANT SELECT     ON TABLES    TO dbrole_readonly
      - GRANT SELECT     ON SEQUENCES TO dbrole_readonly
      - GRANT EXECUTE    ON FUNCTIONS TO dbrole_readonly
      - GRANT USAGE      ON SCHEMAS   TO dbrole_offline
      - GRANT SELECT     ON TABLES    TO dbrole_offline
      - GRANT SELECT     ON SEQUENCES TO dbrole_offline
      - GRANT EXECUTE    ON FUNCTIONS TO dbrole_offline
      - GRANT INSERT     ON TABLES    TO dbrole_readwrite
      - GRANT UPDATE     ON TABLES    TO dbrole_readwrite
      - GRANT DELETE     ON TABLES    TO dbrole_readwrite
      - GRANT USAGE      ON SEQUENCES TO dbrole_readwrite
      - GRANT UPDATE     ON SEQUENCES TO dbrole_readwrite
      - GRANT TRUNCATE   ON TABLES    TO dbrole_admin
      - GRANT REFERENCES ON TABLES    TO dbrole_admin
      - GRANT TRIGGER    ON TABLES    TO dbrole_admin
      - GRANT CREATE     ON SCHEMAS   TO dbrole_admin
    pg_default_schemas: [ monitor ]   # default schemas to be created
    pg_default_extensions:            # default extensions to be created
      - { name: pg_stat_statements ,schema: monitor }
      - { name: pgstattuple        ,schema: monitor }
      - { name: pg_buffercache     ,schema: monitor }
      - { name: pageinspect        ,schema: monitor }
      - { name: pg_prewarm         ,schema: monitor }
      - { name: pg_visibility      ,schema: monitor }
      - { name: pg_freespacemap    ,schema: monitor }
      - { name: postgres_fdw       ,schema: public  }
      - { name: file_fdw           ,schema: public  }
      - { name: btree_gist         ,schema: public  }
      - { name: btree_gin          ,schema: public  }
      - { name: pg_trgm            ,schema: public  }
      - { name: intagg             ,schema: public  }
      - { name: intarray           ,schema: public  }
      - { name: pg_repack }
    pg_reload: true                   # reload postgres after hba changes
    pg_default_hba_rules:             # postgres default host-based authentication rules, order by `order`
      - {user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'  ,order: 100}
      - {user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident' ,order: 150}
      - {user: '${repl}'    ,db: replication ,addr: localhost ,auth: pwd   ,title: 'replicator replication from localhost',order: 200}
      - {user: '${repl}'    ,db: replication ,addr: intra     ,auth: pwd   ,title: 'replicator replication from intranet' ,order: 250}
      - {user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: pwd   ,title: 'replicator postgres db from intranet' ,order: 300}
      - {user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password' ,order: 350}
      - {user: '${monitor}' ,db: all         ,addr: infra     ,auth: pwd   ,title: 'monitor from infra host with password',order: 400}
      - {user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'   ,order: 450}
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: ssl   ,title: 'admin @ everywhere with ssl & pwd'    ,order: 500}
      - {user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: pwd   ,title: 'pgbouncer read/write via local socket',order: 550}
      - {user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: pwd   ,title: 'read/write biz user via password'     ,order: 600}
      - {user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: pwd   ,title: 'allow etl offline tasks from intranet',order: 650}
    pgb_default_hba_rules:            # pgbouncer default host-based authentication rules, order by `order`
      - {user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident',order: 100}
      - {user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd' ,order: 150}
      - {user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: pwd   ,title: 'monitor access via intranet with pwd' ,order: 200}
      - {user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr' ,order: 250}
      - {user: '${admin}'   ,db: all         ,addr: intra     ,auth: pwd   ,title: 'admin access via intranet with pwd'   ,order: 300}
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'   ,order: 350}
      - {user: 'all'        ,db: all         ,addr: intra     ,auth: pwd   ,title: 'allow all user intra access with pwd' ,order: 400}

    #-----------------------------------------------------------------
    # PG_BACKUP
    #-----------------------------------------------------------------
    pgbackrest_enabled: true          # enable pgbackrest on pgsql host?
    pgbackrest_log_dir: /pg/log/pgbackrest # pgbackrest log dir, `/pg/log/pgbackrest` by default
    pgbackrest_method: local          # pgbackrest repo method: local,minio,[user-defined...]
    pgbackrest_init_backup: true      # take a full backup after pgbackrest is initialized?
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backups for the last 14 days

    #-----------------------------------------------------------------
    # PG_ACCESS
    #-----------------------------------------------------------------
    pgbouncer_enabled: true           # if disabled, pgbouncer will not be launched on pgsql host
    pgbouncer_port: 6432              # pgbouncer listen port, 6432 by default
    pgbouncer_log_dir: /pg/log/pgbouncer  # pgbouncer log dir, `/pg/log/pgbouncer` by default
    pgbouncer_auth_query: false       # query postgres to retrieve unlisted business users?
    pgbouncer_poolmode: transaction   # pooling mode: transaction,session,statement, transaction by default
    pgbouncer_sslmode: disable        # pgbouncer client ssl mode, disable by default
    pgbouncer_ignore_param: [ extra_float_digits, application_name, TimeZone, DateStyle, IntervalStyle, search_path ]
    pg_weight: 100          #INSTANCE # relative load balance weight in service, 100 by default, 0-255
    pg_service_provider: ''           # dedicate haproxy node group name, or empty string for local nodes by default
    pg_default_service_dest: pgbouncer # default service destination if svc.dest='default'
    pg_default_services:              # postgres default service definitions
      - { name: primary ,port: 5433 ,dest: default  ,check: /primary   ,selector: "[]" }
      - { name: replica ,port: 5434 ,dest: default  ,check: /read-only ,selector: "[]" , backup: "[? pg_role == `primary` || pg_role == `offline` ]" }
      - { name: default ,port: 5436 ,dest: postgres ,check: /primary   ,selector: "[]" }
      - { name: offline ,port: 5438 ,dest: postgres ,check: /replica   ,selector: "[? pg_role == `offline` || pg_offline_query ]" , backup: "[? pg_role == `replica` && !pg_offline_query]"}
    pg_vip_enabled: false             # enable a l2 vip for pgsql primary? false by default
    pg_vip_address: 127.0.0.1/24      # vip address in `<ipv4>/<mask>` format, require if vip is enabled
    pg_vip_interface: eth0            # vip network interface to listen, eth0 by default
    pg_dns_suffix: ''                 # pgsql dns suffix, '' by default
    pg_dns_target: auto               # auto, primary, vip, none, or ad hoc ip

    #-----------------------------------------------------------------
    # PG_MONITOR
    #-----------------------------------------------------------------
    pg_exporter_enabled: true              # enable pg_exporter on pgsql hosts?
    pg_exporter_config: pg_exporter.yml    # pg_exporter configuration file name
    pg_exporter_cache_ttls: '1,10,60,300'  # pg_exporter collector ttl stage in seconds, '1,10,60,300' by default
    pg_exporter_port: 9630                 # pg_exporter listen port, 9630 by default
    pg_exporter_params: 'sslmode=disable'  # extra url parameters for pg_exporter dsn
    pg_exporter_url: ''                    # overwrite auto-generate pg dsn if specified
    pg_exporter_auto_discovery: true       # enable auto database discovery? enabled by default
    pg_exporter_exclude_database: 'template0,template1,postgres' # csv of databases that WILL NOT be monitored during auto-discovery
    pg_exporter_include_database: ''       # csv of databases that WILL BE monitored during auto-discovery
    pg_exporter_connect_timeout: 200       # pg_exporter connect timeout in ms, 200 by default
    pg_exporter_options: ''                # overwrite extra options for pg_exporter
    pgbouncer_exporter_enabled: true       # enable pgbouncer_exporter on pgsql hosts?
    pgbouncer_exporter_port: 9631          # pgbouncer_exporter listen port, 9631 by default
    pgbouncer_exporter_url: ''             # overwrite auto-generate pgbouncer dsn if specified
    pgbouncer_exporter_options: ''         # overwrite extra options for pgbouncer_exporter
    pgbackrest_exporter_enabled: true      # enable pgbackrest_exporter on pgsql hosts?
    pgbackrest_exporter_port: 9854         # pgbackrest_exporter listen port, 9854 by default
    pgbackrest_exporter_options: >
      --collect.interval=120
      --log.level=info

    #-----------------------------------------------------------------
    # PG_REMOVE
    #-----------------------------------------------------------------
    pg_safeguard: false               # stop pg_remove running if pg_safeguard is enabled, false by default
    pg_rm_data: true                  # remove postgres data during remove? true by default
    pg_rm_backup: true                # remove pgbackrest backup during primary remove? true by default
    pg_rm_pkg: true                   # uninstall postgres packages during remove? true by default

...

Configuration Notes

The demo/debian template is a configuration tuned for the Debian and Ubuntu distributions.

Supported Distributions

  • Debian 12 (Bookworm)
  • Debian 13 (Trixie)
  • Ubuntu 22.04 LTS (Jammy)
  • Ubuntu 24.04 LTS (Noble)

Key Features

  • Uses the PGDG APT repositories
  • Tuned for the APT package manager
  • Uses Debian/Ubuntu-specific package names (see the sketch below)
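For instance, the node_default_packages list in the config above uses Debian/Ubuntu package names; the EL equivalents differ. A non-exhaustive sketch of the mapping (these are standard distro packages, not Pigsty-specific names):

zlib1g / libreadline-dev / dnsutils     # Debian & Ubuntu package names
zlib   / readline-devel  / bind-utils   # EL (RHEL/Rocky/Alma) equivalents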

Use Cases

  • Cloud servers (Ubuntu is widely used there)
  • Container environments (Debian is a common base image)
  • Development and testing environments
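To bootstrap with this template on a Debian or Ubuntu host, the usual flow should apply (a sketch; the -i flag is optional):

./configure -c demo/debian [-i <primary_ip>]
./deploy.yml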

33 - demo/demo

Pigsty public demo site configuration, showing how to configure SSL certificates, expose domains, and install all extensions

The demo/demo configuration template is the config file used by Pigsty's public demo site. It demonstrates how to expose a website, configure SSL certificates, and install every available extension.

If you want to run your own public-facing service on a cloud server, this template is a good starting point.


Configuration Overview

  • Name: demo/demo
  • Node Count: single node
  • Description: Pigsty public demo site configuration
  • OS Distros: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64
  • Related: meta, rich

Usage:

./configure -c demo/demo [-i <primary_ip>]

Key Features

On top of meta, this template adds:

  • SSL certificates and custom domain names (e.g. pigsty.cc); see the sketch below
  • Downloads and installs all available PostgreSQL 18 extensions
  • Enables Docker with registry mirrors configured
  • Deploys MinIO object storage
  • Pre-defines several business databases and users
  • Adds a Redis primary/replica example
  • Adds a FerretDB MongoDB-compatible cluster
  • Adds a Kafka sample cluster
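As a minimal sketch of the SSL/domain part (the domain and file paths here are placeholders; the real entries are in the config below), a custom site is declared in infra_portal like this:

infra_portal:
  home : { domain: i.pigsty }     # default portal entry
  web  : { domain: example.com ,path: "/www/example.com" ,cert: /etc/cert/example.com.crt ,key: /etc/cert/example.com.key }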

Configuration Content

Source file: pigsty/conf/demo/demo.yml

---
#==============================================================#
# File      :   demo.yml
# Desc      :   Pigsty Public Demo Configuration
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra:
      hosts: { 10.10.10.10: { infra_seq: 1 } }
      vars:
        nodename: pigsty.cc       # overwrite the default hostname
        node_id_from_pg: false    # do not use the pg identity as hostname
        docker_enabled: true      # enable docker on this node
        docker_registry_mirrors: ["https://mirror.ccs.tencentyun.com", "https://docker.1ms.run"]
        # ./pgsql-monitor.yml -l infra     # monitor 'external' PostgreSQL instance
        pg_exporters:             # treat local postgres as RDS for demonstration purpose
          20001: { pg_cluster: pg-foo, pg_seq: 1, pg_host: 10.10.10.10 }
          #20002: { pg_cluster: pg-bar, pg_seq: 1, pg_host: 10.10.10.11 , pg_port: 5432 }
          #20003: { pg_cluster: pg-bar, pg_seq: 2, pg_host: 10.10.10.12 , pg_exporter_url: 'postgres://dbuser_monitor:[email protected]:5432/postgres?sslmode=disable' }
          #20004: { pg_cluster: pg-bar, pg_seq: 3, pg_host: 10.10.10.13 , pg_monitor_username: dbuser_monitor, pg_monitor_password: DBUser.Monitor }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # postgres example cluster: pg-meta
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta       ,password: DBUser.Meta       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view       ,password: DBUser.Viewer     ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
          - {name: dbuser_grafana    ,password: DBUser.Grafana    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database    }
          - {name: dbuser_bytebase   ,password: DBUser.Bytebase   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database   }
          - {name: dbuser_kong       ,password: DBUser.Kong       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for kong api gateway    }
          - {name: dbuser_gitea      ,password: DBUser.Gitea      ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service       }
          - {name: dbuser_wiki       ,password: DBUser.Wiki       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service     }
          - {name: dbuser_noco       ,password: DBUser.Noco       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for nocodb service      }
          - {name: dbuser_odoo       ,password: DBUser.Odoo       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for odoo service ,createdb: true } #,superuser: true}
          - {name: dbuser_mattermost ,password: DBUser.MatterMost ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for mattermost ,createdb: true }
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [{name: vector},{name: postgis},{name: timescaledb}]}
          - {name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database  }
          - {name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          - {name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong api gateway database }
          - {name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          - {name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database  }
          - {name: noco     ,owner: dbuser_noco     ,revokeconn: true ,comment: nocodb database     }
          #- {name: odoo     ,owner: dbuser_odoo     ,revokeconn: true ,comment: odoo main database  }
          - {name: mattermost ,owner: dbuser_mattermost ,revokeconn: true ,comment: mattermost main database }
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        pg_libs: 'timescaledb,pg_stat_statements, auto_explain'  # add timescaledb to shared_preload_libraries
        pg_extensions: # extensions to be installed on this cluster
          - timescaledb timescaledb_toolkit pg_timeseries periods temporal_tables emaj table_version pg_cron pg_task pg_later pg_background
          - postgis pgrouting pointcloud pg_h3 q3c ogr_fdw geoip pg_polyline pg_geohash #mobilitydb
          - pgvector vchord pgvectorscale pg_vectorize pg_similarity smlar pg_summarize pg_tiktoken pg4ml #pgml
          - pg_search pgroonga pg_bigm zhparser pg_bestmatch vchord_bm25 hunspell
          - citus hydra pg_analytics pg_duckdb pg_mooncake duckdb_fdw pg_parquet pg_fkpart pg_partman plproxy #pg_strom
          - age hll rum pg_graphql pg_jsonschema jsquery pg_hint_plan hypopg index_advisor pg_plan_filter imgsmlr pg_ivm pg_incremental pgmq pgq pg_cardano omnigres #rdkit
          - pg_tle plv8 pllua plprql pldebugger plpgsql_check plprofiler plsh pljava #plr #pgtap #faker #dbt2
          - pg_prefix pg_semver pgunit pgpdf pglite_fusion md5hash asn1oid roaringbitmap pgfaceting pgsphere pg_country pg_xenophile pg_currency pg_collection pgmp numeral pg_rational pguint pg_uint128 hashtypes ip4r pg_uri pgemailaddr pg_acl timestamp9 chkpass #pg_duration #debversion #pg_rrule
          - pg_gzip pg_bzip pg_zstd pg_http pg_net pg_curl pgjq pgjwt pg_smtp_client pg_html5_email_address url_encode pgsql_tweaks pg_extra_time pgpcre icu_ext pgqr pg_protobuf envvar floatfile pg_readme ddl_historization data_historization pg_schedoc pg_hashlib pg_xxhash shacrypt cryptint pg_ecdsa pgsparql
          - pg_idkit pg_uuidv7 permuteseq pg_hashids sequential_uuids topn quantile lower_quantile count_distinct omnisketch ddsketch vasco pgxicor tdigest first_last_agg extra_window_functions floatvec aggs_for_vecs aggs_for_arrays pg_arraymath pg_math pg_random pg_base36 pg_base62 pg_base58 pg_financial
          - pg_repack pg_squeeze pg_dirtyread pgfincore pg_cooldown pg_ddlx pg_prioritize pg_checksums pg_readonly pg_upless pg_permissions pgautofailover pg_catcheck preprepare pgcozy pg_orphaned pg_crash pg_cheat_funcs pg_fio pg_savior safeupdate pg_drop_events table_log #pgagent #pgpool
          - pg_profile pg_tracing pg_show_plans pg_stat_kcache pg_stat_monitor pg_qualstats pg_store_plans pg_track_settings pg_wait_sampling system_stats pg_meta pgnodemx pg_sqlog bgw_replstatus pgmeminfo toastinfo pg_explain_ui pg_relusage pagevis powa
          - passwordcheck supautils pgsodium pg_vault pg_session_jwt pg_anon pg_tde pgsmcrypto pgaudit pgauditlogtofile pg_auth_mon credcheck pgcryptokey pg_jobmon logerrors login_hook set_user pg_snakeoil pgextwlist pg_auditor sslutils pg_noset
          - wrappers multicorn odbc_fdw jdbc_fdw mysql_fdw tds_fdw sqlite_fdw pgbouncer_fdw mongo_fdw redis_fdw pg_redis_pubsub kafka_fdw hdfs_fdw firebird_fdw aws_s3 log_fdw #oracle_fdw #db2_fdw
          - documentdb orafce pgtt session_variable pg_statement_rollback pg_dbms_metadata pg_dbms_lock pgmemcache #pg_dbms_job #wiltondb
          - pglogical pglogical_ticker pgl_ddl_deploy pg_failover_slots db_migrator wal2json wal2mongo decoderbufs decoder_raw mimeo pg_fact_loader pg_bulkload #repmgr

    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' }, 6381: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    # ./mongo.yml -l pg-mongo
    pg-mongo:
      hosts: { 10.10.10.10: { mongo_seq: 1 } }
      vars:
        mongo_cluster: pg-mongo
        mongo_pgurl: 'postgres://dbuser_meta:[email protected]:5432/grafana'

    # ./kafka.yml -l kf-main
    kf-main:
      hosts: { 10.10.10.10: { kafka_seq: 1, kafka_role: controller } }
      vars:
        kafka_cluster: kf-main
        kafka_peer_port: 9093


  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: china                     # upstream mirror region: default|china|europe

    infra_portal:                     # infra services exposed via portal
      home         : { domain: i.pigsty }     # default domain name
      cc           : { domain: pigsty.cc      ,path:     "/www/pigsty.cc"   ,cert: /etc/cert/pigsty.cc.crt ,key: /etc/cert/pigsty.cc.key }
      minio        : { domain: m.pigsty.cc    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      postgrest    : { domain: api.pigsty.cc  ,endpoint: "127.0.0.1:8884"   }
      pgadmin      : { domain: adm.pigsty.cc  ,endpoint: "127.0.0.1:8885"   }
      pgweb        : { domain: cli.pigsty.cc  ,endpoint: "127.0.0.1:8886"   }
      bytebase     : { domain: ddl.pigsty.cc  ,endpoint: "127.0.0.1:8887"   }
      jupyter      : { domain: lab.pigsty.cc  ,endpoint: "127.0.0.1:8888", websocket: true }
      gitea        : { domain: git.pigsty.cc  ,endpoint: "127.0.0.1:8889" }
      wiki         : { domain: wiki.pigsty.cc ,endpoint: "127.0.0.1:9002" }
      noco         : { domain: noco.pigsty.cc ,endpoint: "127.0.0.1:9003" }
      supa         : { domain: supa.pigsty.cc ,endpoint: "10.10.10.10:8000" ,websocket: true }
      dify         : { domain: dify.pigsty.cc ,endpoint: "10.10.10.10:8001" ,websocket: true }
      odoo         : { domain: odoo.pigsty.cc ,endpoint: "127.0.0.1:8069"   ,websocket: true }
      mm           : { domain: mm.pigsty.cc   ,endpoint: "10.10.10.10:8065" ,websocket: true }
    # scp -r ~/pgsty/cc/cert/*       pj:/etc/cert/       # copy https certs
    # scp -r ~/dev/pigsty.cc/public  pj:/www/pigsty.cc   # copy pigsty.cc website


    node_etc_hosts: [ "${admin_ip} sss.pigsty" ]
    node_timezone: Asia/Hong_Kong
    node_ntp_servers:
      - pool cn.pool.ntp.org iburst
      - pool ${admin_ip} iburst       # assume non-admin nodes do not have internet access
    pgbackrest_enabled: false         # do not take backups since this is disposable demo env
    #prometheus_options: '--storage.tsdb.retention.time=15d' # prometheus extra server options
    prometheus_options: '--storage.tsdb.retention.size=3GB' # keep 3GB data at most on demo env

    # install all postgresql18 extensions
    pg_version: 18                    # default postgres version
    repo_extra_packages: [ pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_extensions: [pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl ] #,pg18-olap]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Configuration Notes

The demo/demo template is Pigsty's public demo configuration, showing a complete production-grade deployment example.

Key Features

  • HTTPS certificates and custom domain names configured
  • All available PostgreSQL extensions installed
  • Redis, FerretDB, Kafka and other components integrated
  • Docker registry mirrors configured (see the sketch below)
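The Docker part boils down to two variables on the infra node (mirror URLs copied from the config above):

docker_enabled: true              # enable docker on this node
docker_registry_mirrors: ["https://mirror.ccs.tencentyun.com", "https://docker.1ms.run"]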

Use Cases

  • Running a public demo site
  • Scenarios that need a full feature showcase
  • Learning advanced Pigsty configuration

Caveats

  • SSL certificate files must be prepared in advance (see the sketch below)
  • DNS resolution must be configured
  • Some extensions are unavailable on the ARM64 architecture
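Certificates and site content are provisioned out of band; a sketch based on the scp comments in the config above (pj is an ssh host alias for the demo server, and the local paths are illustrative):

scp -r ~/pgsty/cc/cert/*       pj:/etc/cert/       # copy https certs
scp -r ~/dev/pigsty.cc/public  pj:/www/pigsty.cc   # copy website content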

34 - demo/minio

A highly available 4-node x 4-drive multi-node multi-drive MinIO cluster demo

The demo/minio configuration template demonstrates how to deploy a highly available 4-node x 4-drive (16 drives in total) MinIO cluster providing S3-compatible object storage.

For more tutorials, see the MINIO module documentation.


Configuration Overview

  • Name: demo/minio
  • Node Count: 4 nodes
  • Description: highly available multi-node multi-drive MinIO cluster demo
  • OS Distros: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c demo/minio

Note: this is a 4-node template; after generating the config you need to change the IP addresses of the other three nodes (see the sketch below)
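One way to rewrite the placeholder IPs after configure (a sketch; substitute your real node addresses and review the result before deploying):

sed -i 's/10.10.10.11/<node2_ip>/g; s/10.10.10.12/<node3_ip>/g; s/10.10.10.13/<node4_ip>/g' pigsty.yml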


Configuration Content

Source file: pigsty/conf/demo/minio.yml

---
#==============================================================#
# File      :   minio.yml
# Desc      :   pigsty: 4 node x 4 disk MNMD minio clusters
# Ctime     :   2023-01-07
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# One pass installation with:
# ./deploy.yml
#==============================================================#
# 1.  minio-1 @ 10.10.10.10:9000 -  - (9002) svc <-x  10.10.10.9:9002
# 2.  minio-2 @ 10.10.10.11:9000 -xx- (9002) svc <-x <----------------
# 3.  minio-3 @ 10.10.10.12:9000 -xx- (9002) svc <-x  sss.pigsty:9002
# 4.  minio-4 @ 10.10.10.13:9000 -  - (9002) svc <-x  (intranet dns)
#==============================================================#
# use minio load balancer service (9002) instead of direct access (9000)
# mcli alias set sss https://sss.pigsty:9002 minioadmin S3User.MinIO
#==============================================================#
# https://min.io/docs/minio/linux/operations/install-deploy-manage/deploy-minio-multi-node-multi-drive.html
# MINIO_VOLUMES="https://minio-{1...4}.pigsty:9000/data{1...4}/minio"


all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # minio cluster with 4 nodes and 4 drives per node
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 , nodename: minio-1 }
        10.10.10.11: { minio_seq: 2 , nodename: minio-2 }
        10.10.10.12: { minio_seq: 3 , nodename: minio-3 }
        10.10.10.13: { minio_seq: 4 , nodename: minio-4 }
      vars:
        minio_cluster: minio
        minio_data: '/data{1...4}'
        minio_buckets:                    # list of minio bucket to be created
          - { name: pgsql }
          - { name: meta ,versioning: true }
          - { name: data }
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

        # bind a node l2 vip (10.10.10.9) to minio cluster (optional)
        node_cluster: minio
        vip_enabled: true
        vip_vrid: 128
        vip_address: 10.10.10.9
        vip_interface: eth1

        # expose minio service with haproxy on all nodes
        haproxy_services:
          - name: minio                    # [REQUIRED] service name, unique
            port: 9002                     # [REQUIRED] service port, unique
            balance: leastconn             # [OPTIONAL] load balancer algorithm
            options:                       # [OPTIONAL] minio health check
              - option httpchk
              - option http-keep-alive
              - http-check send meth OPTIONS uri /minio/health/live
              - http-check expect status 200
            servers:
              - { name: minio-1 ,ip: 10.10.10.10 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-2 ,ip: 10.10.10.11 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-3 ,ip: 10.10.10.12 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-4 ,ip: 10.10.10.13 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

  vars:
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

      # domain names to access minio web console via nginx web portal (optional)
      minio        : { domain: m.pigsty     ,endpoint: "10.10.10.10:9001" ,scheme: https ,websocket: true }
      minio10      : { domain: m10.pigsty   ,endpoint: "10.10.10.10:9001" ,scheme: https ,websocket: true }
      minio11      : { domain: m11.pigsty   ,endpoint: "10.10.10.11:9001" ,scheme: https ,websocket: true }
      minio12      : { domain: m12.pigsty   ,endpoint: "10.10.10.12:9001" ,scheme: https ,websocket: true }
      minio13      : { domain: m13.pigsty   ,endpoint: "10.10.10.13:9001" ,scheme: https ,websocket: true }

    minio_endpoint: https://sss.pigsty:9002   # explicit overwrite minio endpoint with haproxy port
    node_etc_hosts: ["10.10.10.9 sss.pigsty"] # domain name to access minio from all nodes (required)

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
...

Configuration Notes

The demo/minio template is a reference configuration for production-grade MinIO deployment, showcasing the multi-node multi-drive (MNMD) architecture.

Key Features

  • Multi-node multi-drive architecture: 4 nodes × 4 drives = a 16-drive erasure-coding set
  • L2 VIP high availability: a virtual IP bound via Keepalived
  • HAProxy load balancing: port 9002 as the unified access entry
  • Fine-grained permissions: separate users and buckets per application

Access

# configure a MinIO alias with mcli (through the HAProxy load balancer)
mcli alias set sss https://sss.pigsty:9002 minioadmin S3User.MinIO

# list buckets
mcli ls sss/

# web console
# visit https://m.pigsty or https://m10-m13.pigsty

Use Cases

  • Environments that need S3-compatible object storage
  • PostgreSQL backup storage (as a pgBackRest remote repository; see the sketch below)
  • Data lakes for big data and AI workloads
  • Production environments that need highly available object storage
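To point PostgreSQL backups at this cluster, switching the pgBackRest repo method in the pgsql cluster vars should suffice (a sketch, assuming the default minio repo definition shown earlier in this document):

pgbackrest_method: minio          # switch the pgbackrest repo from local fs to minio
# note: this demo exposes the service on haproxy port 9002, so the repo's
# storage_port may need adjusting from the 9000 default accordingly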

Caveats

  • Each node needs 4 separate disks mounted at /data1 - /data4 (see the sketch below)
  • At least 4 nodes are recommended in production for erasure-coding redundancy
  • The VIP requires a correctly configured network interface (vip_interface)
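Disk preparation is outside Pigsty's scope. A minimal sketch for one node, assuming four raw devices /dev/sdb through /dev/sde (all device names illustrative):

mkfs.xfs /dev/sdb && mkdir -p /data1 && mount /dev/sdb /data1   # repeat for sdc->/data2, sdd->/data3, sde->/data4
# remember to persist the mounts in /etc/fstab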

35 - build/oss

Pigsty OSS offline package build environment configuration

The build/oss configuration template is the build environment for Pigsty's open-source offline packages, used to build offline installation packages across multiple operating systems in one batch.

This configuration is intended for developers and contributors only.


Configuration Overview

  • Name: build/oss
  • Node Count: 6 nodes (el9, el10, d12, d13, u22, u24)
  • Description: Pigsty OSS offline package build environment
  • OS Distros: el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64

Usage:

cp conf/build/oss.yml pigsty.yml

Note: this is a build template with fixed IP addresses, for internal use only


Configuration Content

Source file: pigsty/conf/build/oss.yml

---
#==============================================================#
# File      :   oss.yml
# Desc      :   Pigsty 6-node building env (PG18)
# Ctime     :   2024-10-22
# Mtime     :   2025-12-12
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

all:
  vars:
    version: v4.0.0
    admin_ip: 10.10.10.24
    region: china
    etcd_clean: true
    proxy_env:
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn,*.pigsty.cc"

    # building spec
    pg_version: 18
    cache_pkg_dir: 'dist/${version}'
    repo_modules: infra,node,pgsql
    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules ]
    repo_extra_packages: [pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_extensions:                 [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap, pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

  children:
    #el8:  { hosts: { 10.10.10.8:  { pg_cluster: el8 ,pg_seq: 1 ,pg_role: primary }}}
    el9:  { hosts: { 10.10.10.9:  { pg_cluster: el9  ,pg_seq: 1 ,pg_role: primary }}}
    el10: { hosts: { 10.10.10.10: { pg_cluster: el10 ,pg_seq: 1 ,pg_role: primary }}}
    d12:  { hosts: { 10.10.10.12: { pg_cluster: d12  ,pg_seq: 1 ,pg_role: primary }}}
    d13:  { hosts: { 10.10.10.13: { pg_cluster: d13  ,pg_seq: 1 ,pg_role: primary }}}
    u22:  { hosts: { 10.10.10.22: { pg_cluster: u22  ,pg_seq: 1 ,pg_role: primary }}}
    u24:  { hosts: { 10.10.10.24: { pg_cluster: u24  ,pg_seq: 1 ,pg_role: primary }}}
    etcd: { hosts: { 10.10.10.24:  { etcd_seq: 1 }}, vars: { etcd_cluster: etcd    }}
    infra:
      hosts:
        #10.10.10.8:  { infra_seq: 1, admin_ip: 10.10.10.8  ,ansible_host: el8  } #, ansible_python_interpreter: /usr/bin/python3.12 }
        10.10.10.9:  { infra_seq: 2, admin_ip: 10.10.10.9  ,ansible_host: el9  }
        10.10.10.10: { infra_seq: 3, admin_ip: 10.10.10.10 ,ansible_host: el10 }
        10.10.10.12: { infra_seq: 4, admin_ip: 10.10.10.12 ,ansible_host: d12  }
        10.10.10.13: { infra_seq: 5, admin_ip: 10.10.10.13 ,ansible_host: d13  }
        10.10.10.22: { infra_seq: 6, admin_ip: 10.10.10.22 ,ansible_host: u22  }
        10.10.10.24: { infra_seq: 7, admin_ip: 10.10.10.24 ,ansible_host: u24  }
      vars: { node_conf: oltp }

...

Configuration Explained

The build/oss template is the build configuration for Pigsty OSS offline packages.

Build Contents

  • PostgreSQL 18 and all categorized extension packages
  • Infrastructure packages (Prometheus, Grafana, Nginx, etc.)
  • Node packages (monitoring agents, utilities, etc.)
  • Extra modules (extra-modules)

Supported Operating Systems

  • EL9 (Rocky/Alma/RHEL 9)
  • EL10 (Rocky 10 / RHEL 10)
  • Debian 12 (Bookworm)
  • Debian 13 (Trixie)
  • Ubuntu 22.04 (Jammy)
  • Ubuntu 24.04 (Noble)

Build Process

# 1. prepare the build environment
cp conf/build/oss.yml pigsty.yml

# 2. download packages on each node
./infra.yml -t repo_build

# 3. pack the offline installation packages
make cache
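
On success, the offline tarballs land under cache_pkg_dir, which this template sets to dist/${version}; a quick check (exact file names vary by OS and architecture):

# one offline package per build node / OS distro
ls -lh dist/v4.0.0/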

Use Cases

  • Pigsty developers building new releases
  • Contributors testing new extensions
  • Enterprise users building customized offline packages

36 - build/pro

Build environment configuration for Pigsty Pro offline packages (multi-version)

The build/pro configuration template defines the build environment for Pigsty Pro offline packages, covering all PostgreSQL versions 13-18 plus additional commercial components.

This configuration is intended for developers and contributors only.


Configuration Overview

  • Config Name: build/pro
  • Node Count: six nodes (el9, el10, d12, d13, u22, u24)
  • Description: build environment for Pigsty Pro offline packages (multi-version)
  • Supported OS: el9, el10, d12, d13, u22, u24
  • Supported Arch: x86_64

Usage:

cp conf/build/pro.yml pigsty.yml

Note: this is a build template with fixed IP addresses, intended for internal use only.


Configuration Content

Source file: pigsty/conf/build/pro.yml

---
#==============================================================#
# File      :   pro.yml
# Desc      :   Pigsty 6-node pro building env (PG 13-18)
# Ctime     :   2024-10-22
# Mtime     :   2025-12-15
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

all:
  vars:
    version: v4.0.0
    admin_ip: 10.10.10.24
    region: china
    etcd_clean: true
    proxy_env:
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn,*.pigsty.cc"

    # building spec
    pg_version: 18
    cache_pkg_dir: 'dist/${version}/pro'
    repo_modules: infra,node,pgsql
    pg_extensions: []
    repo_packages: [
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules,
      pg18-full,pg18-time,pg18-gis,pg18-rag,pg18-fts,pg18-olap,pg18-feat,pg18-lang,pg18-type,pg18-util,pg18-func,pg18-admin,pg18-stat,pg18-sec,pg18-fdw,pg18-sim,pg18-etl,
      pg17-full,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-util,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
      pg16-full,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-util,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
      pg15-full,pg15-time,pg15-gis,pg15-rag,pg15-fts,pg15-olap,pg15-feat,pg15-lang,pg15-type,pg15-util,pg15-func,pg15-admin,pg15-stat,pg15-sec,pg15-fdw,pg15-sim,pg15-etl,
      pg14-full,pg14-time,pg14-gis,pg14-rag,pg14-fts,pg14-olap,pg14-feat,pg14-lang,pg14-type,pg14-util,pg14-func,pg14-admin,pg14-stat,pg14-sec,pg14-fdw,pg14-sim,pg14-etl,
      pg13-full,pg13-time,pg13-gis,pg13-rag,pg13-fts,pg13-olap,pg13-feat,pg13-lang,pg13-type,pg13-util,pg13-func,pg13-admin,pg13-stat,pg13-sec,pg13-fdw,pg13-sim,pg13-etl,
      infra-extra, kafka, java-runtime, sealos, tigerbeetle, polardb, ivorysql
    ]

  children:
    #el8:  { hosts: { 10.10.10.8:  { pg_cluster: el8 ,pg_seq: 1  ,pg_role: primary }}}
    el9:  { hosts: { 10.10.10.9:  { pg_cluster: el9  ,pg_seq: 1 ,pg_role: primary }}}
    el10: { hosts: { 10.10.10.10: { pg_cluster: el10 ,pg_seq: 1 ,pg_role: primary }}}
    d12:  { hosts: { 10.10.10.12: { pg_cluster: d12  ,pg_seq: 1 ,pg_role: primary }}}
    d13:  { hosts: { 10.10.10.13: { pg_cluster: d13  ,pg_seq: 1 ,pg_role: primary }}}
    u22:  { hosts: { 10.10.10.22: { pg_cluster: u22  ,pg_seq: 1 ,pg_role: primary }}}
    u24:  { hosts: { 10.10.10.24: { pg_cluster: u24  ,pg_seq: 1 ,pg_role: primary }}}
    etcd: { hosts: { 10.10.10.24:  { etcd_seq: 1 }}, vars: { etcd_cluster: etcd    }}
    infra:
      hosts:
        #10.10.10.8:  { infra_seq: 9, admin_ip: 10.10.10.8  ,ansible_host: el8  } #, ansible_python_interpreter: /usr/bin/python3.12 }
        10.10.10.9:  { infra_seq: 1, admin_ip: 10.10.10.9  ,ansible_host: el9  }
        10.10.10.10: { infra_seq: 2, admin_ip: 10.10.10.10 ,ansible_host: el10 }
        10.10.10.12: { infra_seq: 3, admin_ip: 10.10.10.12 ,ansible_host: d12  }
        10.10.10.13: { infra_seq: 4, admin_ip: 10.10.10.13 ,ansible_host: d13  }
        10.10.10.22: { infra_seq: 5, admin_ip: 10.10.10.22 ,ansible_host: u22  }
        10.10.10.24: { infra_seq: 6, admin_ip: 10.10.10.24 ,ansible_host: u24  }
      vars: { node_conf: oltp }

...

Configuration Explained

The build/pro template is the build configuration for Pigsty Pro offline packages, which includes considerably more than the OSS edition.

Differences from the OSS Edition

  • Includes all six major PostgreSQL versions, 13 through 18
  • Includes additional commercial/enterprise components: Kafka, PolarDB, IvorySQL, etc.
  • Includes the Java runtime and tools such as Sealos
  • Output directory is dist/${version}/pro/

Build Contents

  • All of PostgreSQL 13, 14, 15, 16, 17, and 18
  • All categorized extension packages for each version
  • Kafka message queue
  • PolarDB and IvorySQL kernels
  • TigerBeetle distributed database
  • Sealos container platform

Use Cases

  • Enterprise customers that need multi-version support
  • Deployments that need Oracle/MySQL-compatible kernels
  • Deployments that need Kafka message queue integration
  • Deployments that need long-term support (LTS) versions

Build Process

# 1. prepare the build environment
cp conf/build/pro.yml pigsty.yml

# 2. download packages on each node
./infra.yml -t repo_build

# 3. pack the offline installation packages
make cache-pro
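
As with the OSS build, the artifacts land under cache_pkg_dir, which this template points at dist/${version}/pro; a quick check (exact file names vary by OS and architecture):

# pro packages are written to a separate pro/ subdirectory
ls -lh dist/v4.0.0/pro/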