demo/minio
A high-availability multi-node multi-drive MinIO cluster demo: 4 nodes x 4 drives
The demo/minio config template demonstrates how to deploy a high-availability MinIO cluster with 4 nodes x 4 drives (16 drives in total), providing S3-compatible object storage.
For more tutorials, see the MINIO module documentation.
Configuration Overview
- Name: demo/minio
- Node count: 4 nodes
- Description: High-availability multi-node multi-drive MinIO cluster demo
- Supported OS: el8, el9, el10, d12, d13, u22, u24
- Supported arch: x86_64, aarch64
- Related configs: meta
To enable:
./configure -c demo/minio
Note: this is a 4-node template; after generating the config, you need to change the IP addresses of the other three nodes.
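The IP substitution above can be scripted with sed. A minimal sketch, demonstrated on a scratch file so it is safe to try as-is; in practice you would point it at your generated config (e.g. `~/pigsty/pigsty.yml`, an assumption — adjust to your install), and the `10.10.20.*` replacement addresses are placeholders for your real node IPs:

```shell
# Demonstrated on a scratch file; substitute the path of your generated
# config for "$conf" in a real run.
conf=$(mktemp)
printf '10.10.10.11\n10.10.10.12\n10.10.10.13\n' > "$conf"
# Rewrite the three placeholder node IPs to your real addresses:
sed -i -e 's/10\.10\.10\.11/10.10.20.11/g' \
       -e 's/10\.10\.10\.12/10.10.20.12/g' \
       -e 's/10\.10\.10\.13/10.10.20.13/g' "$conf"
cat "$conf"
```

Note that `sed -i` takes a mandatory suffix argument on BSD/macOS; the form above is for GNU sed on Linux.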
Configuration Content
Source file: pigsty/conf/demo/minio.yml
---
#==============================================================#
# File : minio.yml
# Desc : pigsty: 4 node x 4 disk MNMD minio clusters
# Ctime : 2023-01-07
# Mtime : 2025-12-12
# Docs : https://doc.pgsty.com/config
# License : Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright : 2018-2026 Ruohang Feng / Vonng ([email protected])
#==============================================================#
# One pass installation with:
# ./deploy.yml
#==============================================================#
# 1. minio-1 @ 10.10.10.10:9000 - - (9002) svc <-x 10.10.10.9:9002
# 2. minio-2 @ 10.10.10.11:9000 -xx- (9002) svc <-x <----------------
# 3. minio-3 @ 10.10.10.12:9000 -xx- (9002) svc <-x sss.pigsty:9002
# 4. minio-4 @ 10.10.10.13:9000 -    - (9002) svc <-x   (intranet dns)
#==============================================================#
# use minio load balancer service (9002) instead of direct access (9000)
# mcli alias set sss https://sss.pigsty:9002 minioadmin S3User.MinIO
#==============================================================#
# https://min.io/docs/minio/linux/operations/install-deploy-manage/deploy-minio-multi-node-multi-drive.html
# MINIO_VOLUMES="https://minio-{1...4}.pigsty:9000/data{1...4}/minio"
all:
children:
# infra cluster for proxy, monitor, alert, etc..
infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
# minio cluster with 4 nodes and 4 drives per node
minio:
hosts:
10.10.10.10: { minio_seq: 1 , nodename: minio-1 }
10.10.10.11: { minio_seq: 2 , nodename: minio-2 }
10.10.10.12: { minio_seq: 3 , nodename: minio-3 }
10.10.10.13: { minio_seq: 4 , nodename: minio-4 }
vars:
minio_cluster: minio
minio_data: '/data{1...4}'
minio_buckets: # list of minio bucket to be created
- { name: pgsql }
- { name: meta ,versioning: true }
- { name: data }
minio_users: # list of minio user to be created
- { access_key: pgbackrest ,secret_key: S3User.Backup ,policy: pgsql }
- { access_key: s3user_meta ,secret_key: S3User.Meta ,policy: meta }
- { access_key: s3user_data ,secret_key: S3User.Data ,policy: data }
# bind a node l2 vip (10.10.10.9) to minio cluster (optional)
node_cluster: minio
vip_enabled: true
vip_vrid: 128
vip_address: 10.10.10.9
vip_interface: eth1
# expose minio service with haproxy on all nodes
haproxy_services:
- name: minio # [REQUIRED] service name, unique
port: 9002 # [REQUIRED] service port, unique
balance: leastconn # [OPTIONAL] load balancer algorithm
options: # [OPTIONAL] minio health check
- option httpchk
- option http-keep-alive
- http-check send meth OPTIONS uri /minio/health/live
- http-check expect status 200
servers:
- { name: minio-1 ,ip: 10.10.10.10 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
- { name: minio-2 ,ip: 10.10.10.11 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
- { name: minio-3 ,ip: 10.10.10.12 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
- { name: minio-4 ,ip: 10.10.10.13 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
vars:
version: v4.0.0 # pigsty version string
admin_ip: 10.10.10.10 # admin node ip address
region: default # upstream mirror region: default|china|europe
infra_portal: # infra services exposed via portal
home : { domain: i.pigsty } # default domain name
# domain names to access minio web console via nginx web portal (optional)
minio : { domain: m.pigsty ,endpoint: "10.10.10.10:9001" ,scheme: https ,websocket: true }
minio10 : { domain: m10.pigsty ,endpoint: "10.10.10.10:9001" ,scheme: https ,websocket: true }
minio11 : { domain: m11.pigsty ,endpoint: "10.10.10.11:9001" ,scheme: https ,websocket: true }
minio12 : { domain: m12.pigsty ,endpoint: "10.10.10.12:9001" ,scheme: https ,websocket: true }
minio13 : { domain: m13.pigsty ,endpoint: "10.10.10.13:9001" ,scheme: https ,websocket: true }
minio_endpoint: https://sss.pigsty:9002 # explicit overwrite minio endpoint with haproxy port
node_etc_hosts: ["10.10.10.9 sss.pigsty"] # domain name to access minio from all nodes (required)
#----------------------------------------------#
# PASSWORD : https://doc.pgsty.com/config/security
#----------------------------------------------#
grafana_admin_password: pigsty
haproxy_admin_password: pigsty
minio_secret_key: S3User.MinIO
...
Configuration Explained
The demo/minio template is a reference configuration for production-grade MinIO deployments, showcasing the multi-node multi-drive (MNMD) architecture.
Key features:
- MNMD architecture: 4 nodes × 4 drives = a 16-drive erasure coding pool
- L2 VIP high availability: a virtual IP bound via Keepalived
- HAProxy load balancing: a unified access entry on port 9002
- Fine-grained access control: dedicated users and buckets for different applications
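For capacity planning: assuming MinIO's default standard storage-class parity of EC:4 for this erasure set (an assumption — verify on your deployment with `mcli admin info`), usable space is roughly (16 - 4) / 16 of raw capacity. A back-of-envelope calculation, with a hypothetical 4 TB drive size:

```shell
# Usable capacity under EC:4 parity (assumption) with hypothetical 4 TB drives.
drives=16; parity=4; drive_tb=4
raw_tb=$(( drives * drive_tb ))
usable_tb=$(( (drives - parity) * drive_tb ))
echo "raw: ${raw_tb} TB, usable: ~${usable_tb} TB"   # raw: 64 TB, usable: ~48 TB
```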
Access:
# Configure a MinIO alias with mcli (via the HAProxy load balancer)
mcli alias set sss https://sss.pigsty:9002 minioadmin S3User.MinIO
# List buckets
mcli ls sss/
# Web console
# Visit https://m.pigsty or https://m10-m13.pigsty
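Without an S3 client, you can also probe the same liveness endpoint that the HAProxy health check in this template uses. This requires the deployed cluster to be running; `-k` skips TLS verification, or pass `--cacert /etc/pki/ca.crt` instead to validate against the Pigsty CA:

```shell
# Probe MinIO liveness through the HAProxy service port (9002).
# Prints the HTTP status code; 200 means the cluster is healthy.
curl -sk -o /dev/null -w '%{http_code}\n' https://sss.pigsty:9002/minio/health/live
```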
Use cases:
- Environments that need S3-compatible object storage
- PostgreSQL backup storage (as a pgBackRest remote repository)
- Data lakes for big data and AI workloads
- Production environments that require highly available object storage
Caveats:
- Each node needs 4 dedicated disks mounted at /data1 - /data4
- Production deployments should use at least 4 nodes to achieve erasure-coding redundancy
- The VIP requires the correct network interface to be configured (vip_interface)
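The disk-preparation step can be sketched as a dry run that prints the formatting and mount commands for review before you execute any of them. The device names `/dev/vdb`..`/dev/vde` are assumptions — check `lsblk` for your actual hardware first:

```shell
# Dry run: print (do not execute) the commands that would prepare four
# XFS data drives as /data1../data4. Device names are assumptions.
i=0
for dev in /dev/vdb /dev/vdc /dev/vdd /dev/vde; do
  i=$((i+1))
  echo "mkfs.xfs -f $dev && mkdir -p /data$i && mount $dev /data$i"
  echo "$dev /data$i xfs defaults,noatime 0 0   # append to /etc/fstab"
done
```

Review the printed commands, then run them (or pipe the first line of each pair to a shell) once you have confirmed the devices are blank.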