Architecture
Architecture and entities in the infra module
A standard Pigsty deployment comes with an INFRA module that provides the following services. They are essential for a production-grade PostgreSQL service and are installed by default:
Component | Port | Domain | Description |
---|---|---|---|
nginx | 80 | h.pigsty | Web Service Portal (YUM/APT Repo) |
alertmanager | 9093 | a.pigsty | Alert Aggregation and delivery |
prometheus | 9090 | p.pigsty | Monitoring Time Series Database |
grafana | 3000 | g.pigsty | Visualization Platform |
loki | 3100 | - | Logging Collection Server |
pushgateway | 9091 | - | Collect One-Time Job Metrics |
blackbox_exporter | 9115 | - | ICMP, TCP, HTTP Probing |
dnsmasq | 53 | - | DNS Server, optional |
chronyd | 123 | - | NTP Time Server, optional |
HA PG can be deployed without INFRA
If you don’t want these, the Slim Install mode deploys HA PostgreSQL without the INFRA module.
- Nginx: Acts as a web server for local repo and a reverse proxy for other web UI services
- Grafana: Visualization platform for metrics, dashboards, and data analytics
- Loki: Centralized log aggregation and querying via Grafana
- Prometheus: Time-series monitoring database for metrics collection, storage, and alert evaluation
- AlertManager: Alert aggregation, notification dispatch, and silencing
- PushGateway: Collects metrics from one-off and batch jobs
- BlackboxExporter: Probes node IP and VIP reachability
- DNSMASQ: DNS resolution for Pigsty’s internal domains
- Chronyd: NTP time synchronization to keep all nodes in sync
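These components are installed on whichever hosts you declare in the `infra` group of the config inventory. A rough sketch (the IP address below is a placeholder, and the `infra_seq` numbering is shown as commonly used in Pigsty inventories - verify against your own config):

```yaml
all:
  children:
    infra:                              # the INFRA group: carries nginx, prometheus, grafana, loki, ...
      hosts:
        10.10.10.10: { infra_seq: 1 }   # one INFRA node; add more hosts (infra_seq: 2, 3, ...) for redundancy
```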
The INFRA module isn’t mandatory for HA PostgreSQL - for instance, it’s omitted in Slim Install mode.
However, since INFRA provides essential supporting services for production-grade HA PostgreSQL clusters, it’s strongly recommended for most deployments.
If you already have your own infrastructure (Nginx, local repos, monitoring, DNS, NTP), you can disable INFRA and configure Pigsty to use your existing stack instead.
Nginx
Nginx is the gateway for all WebUI services in Pigsty, serving on HTTP (80) / HTTPS (443) by default.
It exposes web UIs like Grafana, Prometheus, AlertManager, and HAProxy console, while also serving static resources like local yum/apt repos.
Nginx configuration follows the `infra_portal` definitions, for example:
```yaml
infra_portal:
  home         : { domain: h.pigsty }
  grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" ,websocket: true }
  prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
  alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
  blackbox     : { endpoint: "${admin_ip}:9115" }
  loki         : { endpoint: "${admin_ip}:3100" }
  #minio       : { domain: sss.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
```
These `endpoint` definitions are referenced by other services: logs go to the `loki` endpoint, Grafana datasources register against the `grafana` endpoint, and alerts route to the `alertmanager` endpoint.
Pigsty allows rich Nginx customization as a local file server or reverse proxy, with self-signed or real HTTPS certs.
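For instance, to put one of your own web apps behind the same portal, you could add another `infra_portal` record; a hypothetical sketch (the `myapp` name, `app.pigsty` domain, and port 8080 are made up for illustration):

```yaml
infra_portal:
  # ... keep the default entries above ...
  myapp : { domain: app.pigsty ,endpoint: "10.10.10.10:8080" }   # hypothetical app proxied by Nginx
```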
For more details, check these tutorials:
- Tutorial: DNS Configuration
- Tutorial: Nginx Service Exposure
- Tutorial: Certbot HTTPS Certificate Management
Local Repo
During installation, Pigsty creates a local software repo on the INFRA node to speed up later software installations.
Located at `/www/pigsty` and served by Nginx, it’s accessible via `http://h.pigsty/pigsty`.
Pigsty’s offline package is a tarball of a pre-built repo directory. If `/www/pigsty` exists with a `/www/pigsty/repo_complete` marker, Pigsty skips downloading from upstream - perfect for air-gapped environments!
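A quick way to check whether an air-gapped node already has a complete local repo (using the paths above):

```bash
ls /www/pigsty/repo_complete && echo "local repo complete - upstream downloads will be skipped"
```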
The repo definition lives in `/www/pigsty.repo`, fetchable via `http://${admin_ip}/pigsty.repo`:
```bash
curl -L http://h.pigsty/pigsty.repo -o /etc/yum.repos.d/pigsty.repo
```
You can also use the file repo directly without Nginx:
```ini
[pigsty-local]
name=Pigsty local $releasever - $basearch
baseurl=file:///www/pigsty/
enabled=1
gpgcheck=0
```
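After placing such a file under `/etc/yum.repos.d/`, you can confirm the repo is visible with plain dnf/yum commands (nothing Pigsty-specific here):

```bash
sudo dnf makecache                               # refresh repo metadata (use yum on older EL releases)
sudo dnf repolist enabled | grep pigsty-local    # the local repo should appear in the enabled list
```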
Local repo configs are in: Config: INFRA - REPO
Prometheus
Prometheus, our TSDB for monitoring, listens on port 9090 (access via `IP:9090` or `http://p.pigsty`).
Key features:
- Service discovery via local static files with identity info (see the sketch after this list)
- Metric scraping, pre-processing, and TSDB storage
- Alert rule evaluation and forwarding to AlertManager
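Such static target files typically follow Prometheus’ `file_sd` format; a sketch of one entry (the identity label names like `cls` / `ins`, the exporter port, and the file layout are illustrative assumptions - check your Prometheus targets directory for the real thing):

```yaml
# one entry in a file_sd target file (standard Prometheus format; values are illustrative)
- labels: { cls: pg-meta, ins: pg-meta-1, ip: 10.10.10.10 }
  targets: [ '10.10.10.10:9630' ]      # e.g. a pg_exporter endpoint (port assumed)
```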
AlertManager handles alerts on port 9093 (`IP:9093` or `http://a.pigsty`). While it receives Prometheus alerts, you’ll need extra config (e.g., SMTP) for notifications.
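Email delivery, for example, is configured in AlertManager itself; a minimal sketch of generic AlertManager settings (hostnames, addresses, and the password are placeholders, and this is plain alertmanager.yml syntax rather than Pigsty parameters):

```yaml
# alertmanager.yml (generic AlertManager config; all values are placeholders)
global:
  smtp_smarthost: 'smtp.example.com:587'
  smtp_from: 'alertmanager@example.com'
  smtp_auth_username: 'alertmanager@example.com'
  smtp_auth_password: 'changeme'
route:
  receiver: email
receivers:
  - name: email
    email_configs:
      - to: 'dba@example.com'
```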
Configs for Prometheus, AlertManager, PushGateway, and BlackboxExporter are in: Config: INFRA - PROMETHEUS
Grafana
Grafana, our visualization powerhouse, runs on port 3000 (`IP:3000` or `http://g.pigsty`).
Pigsty’s monitoring is dashboard-based with URL-driven navigation. Drill down or up quickly to pinpoint issues.
Fun fact: We’ve supercharged Grafana with extra viz plugins like ECharts - it’s not just monitoring, it’s a low-code data app platform!
Loki handles logs on port 3100, with Promtail shipping logs from nodes to the mothership.
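Once logs land in Loki, you can query them from Grafana’s Explore view with LogQL - for instance, filtering one instance’s logs for lines containing ERROR (the `ins` label is an assumption about how Promtail tags Pigsty logs; use whatever labels your setup actually emits):

```
{ins="pg-meta-1"} |= "ERROR"
```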
Configs live in: Config: INFRA - GRAFANA and Config: INFRA - Loki
Ansible
Ansible is already installed on the Admin Node during bootstrap, so the `ansible` installed on INFRA nodes is not actually used.
But it gives you a viable backup option in case your admin node is nuked.
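In that scenario you could, in principle, run the playbooks from an INFRA node instead, assuming a copy of the pigsty source tree and inventory exists there (the cluster name `pg-meta` below is just the common default, used as an example):

```bash
cd ~/pigsty                # assumes the pigsty directory exists on this INFRA node
./pgsql.yml -l pg-meta     # e.g. re-run the PGSQL playbook, limited to the pg-meta cluster
```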
DNSMASQ
DNSMASQ handles DNS resolution, with other modules registering their domains to INFRA’s DNSMASQ service.
DNS records live in `/etc/hosts.d/` on all INFRA nodes.
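Files there use the ordinary hosts-file layout (an IP followed by the domains that resolve to it); a sketch using the portal domains from the table above (the IP and file name are placeholders):

```
# /etc/hosts.d/10.10.10.10  (placeholder file name; hosts-file format)
10.10.10.10  h.pigsty a.pigsty p.pigsty g.pigsty
```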
Configs: Config: INFRA - DNS
Pigsty sets it up just for your convenience; Pigsty does not actually use it internally (static DNS records in `/etc/hosts` are used instead).
Chronyd
Chronyd helps keep all nodes in sync!
NTP configs: Config: NODES - NTP
This is purely optional; if you already have your own NTP servers configured, just leave it alone.
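If you do want Pigsty to manage chronyd, the knobs live in the NODES - NTP config section referenced above; a hypothetical snippet (the parameter names `node_ntp_enabled` / `node_ntp_servers` are assumptions here - verify them against that reference):

```yaml
node_ntp_enabled: true              # assumed parameter: let Pigsty manage chronyd on nodes
node_ntp_servers:                   # assumed parameter: chrony server directives to use
  - pool pool.ntp.org iburst
```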