
Prometheus and Grafana remain the modern standard for infrastructure and application monitoring because they solve two different but complementary problems extremely well: Prometheus collects and stores time-series metrics, while Grafana turns those metrics into usable dashboards and alerts. Prometheus is built around a pull-based model, a powerful query language, and a rich ecosystem of exporters for Linux, databases, applications, and cloud-native services. Grafana, meanwhile, is the visualization layer that helps teams turn raw metrics into actionable insight, whether you are debugging a host issue, tracking application latency, or building an operations dashboard. Prometheus’s official exporter ecosystem includes the Node Exporter for Linux host metrics, and Grafana provides built-in Prometheus data source support plus dashboard import workflows. [Prometheus Installation] [Prometheus Exporters and Integrations] [Grafana Prometheus Data Source]
For CentOS and RHEL-like systems, this stack remains especially practical because it runs cleanly as native services, integrates with systemd, and can be hardened with standard Linux security controls. Even if your estate is moving toward containers or managed observability, Prometheus and Grafana are still foundational: they’re lightweight enough for a single server, yet flexible enough to serve as the core of a broader observability platform. Prometheus also has a documented release cadence and current stable downloads, which makes it straightforward to install a recent release instead of relying on old distro packages. [Prometheus Release Cycle] [Prometheus Download]

Before you install anything, make sure you are starting from a supported CentOS/RHEL-like base. In practice, that usually means a current RHEL-compatible distribution such as Rocky Linux, AlmaLinux, or a supported CentOS Stream release rather than an end-of-life CentOS version. That matters because Prometheus and Grafana both benefit from current system libraries, security patches, and active package support. On a fresh monitoring host, begin by updating packages, confirming the kernel and filesystem are healthy, and checking available CPU, memory, and disk space. Prometheus is efficient, but it still needs enough room for its local time-series database, especially if you plan to retain metrics for more than a few days. Grafana is relatively light on resources, but dashboards, plugins, and concurrent users still make RAM and I/O worth planning for. [Prometheus Installation] [Grafana RHEL/Fedora Install]
A practical baseline for a small lab is 2 vCPU, 4 GB RAM, and enough SSD storage for the retention window you choose. For a production monitoring node, increase CPU and disk IOPS first, then memory if you expect many scrape targets or longer retention. Also confirm that firewalld or your local firewall policy is available, because you will typically need to expose Prometheus on port 9090 and Grafana on port 3000, at least initially. If you intend to monitor the local host, you will also want to install and run Node Exporter on port 9100. Prometheus’s Node Exporter guide explicitly shows scraping localhost:9100 for host metrics. [Node Exporter Guide]
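Node Exporter itself is a small single binary; a minimal install sketch follows the same pattern used for Prometheus later in this guide (v1.8.2 is shown as an example; check the Node Exporter releases page for the current version):

```
# Dedicated non-login account and binary install
sudo useradd --no-create-home --shell /sbin/nologin node_exporter
cd /tmp
wget https://github.com/prometheus/node_exporter/releases/download/v1.8.2/node_exporter-1.8.2.linux-amd64.tar.gz
tar -xvf node_exporter-1.8.2.linux-amd64.tar.gz
sudo cp node_exporter-1.8.2.linux-amd64/node_exporter /usr/local/bin/
# Minimal unit file; afterwards run: systemctl daemon-reload && systemctl enable --now node_exporter
cat <<'EOF' | sudo tee /etc/systemd/system/node_exporter.service
[Unit]
Description=Node Exporter
After=network-online.target

[Service]
User=node_exporter
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
EOF
```

Once it is running, curl http://localhost:9100/metrics should return host metrics in Prometheus format.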
A good prep checklist looks like this:
sudo dnf update -y
sudo reboot
free -h
df -h /
uname -r
cat /etc/os-release
If the system is underprovisioned, fix that before you install the stack. Monitoring systems often fail not because of software bugs, but because the storage or memory plan was too optimistic.
Prometheus is best installed from the official release binaries rather than distro packages, because the upstream project publishes current stable versions directly. At the time of writing, the Prometheus download page lists version 3.10.0 as the latest release. The official installation docs also emphasize precompiled binaries as the normal installation path for official components. [Prometheus Download] [Prometheus Installation]
The general installation flow is straightforward:
Download the latest stable Linux build from the Prometheus releases page.
Create a dedicated prometheus user and group.
Place binaries under a clean system path such as /usr/local/bin.
Create configuration and data directories.
Install a systemd unit so the service starts at boot.
Example setup:
# Create user and directories
sudo useradd --no-create-home --shell /sbin/nologin prometheus
sudo mkdir -p /etc/prometheus /var/lib/prometheus
# Download and extract the latest release
cd /tmp
wget https://github.com/prometheus/prometheus/releases/download/v3.10.0/prometheus-3.10.0.linux-amd64.tar.gz
tar -xvf prometheus-3.10.0.linux-amd64.tar.gz
# Install binaries
sudo cp prometheus-3.10.0.linux-amd64/prometheus /usr/local/bin/
sudo cp prometheus-3.10.0.linux-amd64/promtool /usr/local/bin/
# Console assets (shipped with older 2.x releases; recent 3.x tarballs may
# no longer include these directories, so skip this step if they are absent)
sudo cp -r prometheus-3.10.0.linux-amd64/consoles /etc/prometheus/ 2>/dev/null || true
sudo cp -r prometheus-3.10.0.linux-amd64/console_libraries /etc/prometheus/ 2>/dev/null || true
# Set ownership
sudo chown -R prometheus:prometheus /etc/prometheus /var/lib/prometheus
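When scripting this install across multiple hosts, the download URL follows a predictable pattern; a small helper sketch (the version and architecture are parameters you supply yourself):

```shell
# Build the release tarball URL for a given version and architecture.
# The pattern matches the GitHub release links used above.
prom_url() {
  local ver="$1" arch="${2:-linux-amd64}"
  echo "https://github.com/prometheus/prometheus/releases/download/v${ver}/prometheus-${ver}.${arch}.tar.gz"
}

prom_url 3.10.0
```

This keeps version bumps to a one-line change in your provisioning script.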
Next, create a systemd unit file so Prometheus behaves like a proper service rather than a manually started process. Prometheus supports configuration reloading via SIGHUP or the /-/reload endpoint (the latter only when started with --web.enable-lifecycle), which is useful for operational changes without downtime. [Getting Started with Prometheus] [Prometheus FAQ]
# /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/var/lib/prometheus
Restart=on-failure
[Install]
WantedBy=multi-user.target
Then enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable --now prometheus
sudo systemctl status prometheus
This gives you a clean, repeatable installation with a dedicated service account and predictable file locations.
Prometheus configuration is YAML-based and centered on scrape jobs. The simplest useful setup is to monitor Prometheus itself on localhost:9090, your host metrics via Node Exporter on localhost:9100, and one or more application endpoints that expose Prometheus-format metrics. The official configuration docs explain that static targets are declared under scrape_configs using static_configs, and the Node Exporter guide shows the canonical localhost example. [Prometheus Configuration] [Node Exporter Guide]
A practical starter configuration:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']

  - job_name: node
    static_configs:
      - targets: ['localhost:9100']

  - job_name: myapp
    metrics_path: /metrics
    static_configs:
      - targets: ['127.0.0.1:8080']
That third job is where application observability becomes valuable. If your application exposes a /metrics endpoint, Prometheus can ingest custom business and service metrics such as request rate, error counts, queue depth, or latency histograms. If your app does not expose metrics directly, the Prometheus exporter ecosystem provides standardized ways to bridge the gap. Prometheus documents exporters as the recommended pattern for third-party systems and Linux host metrics. [Prometheus Exporters and Integrations]
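For reference, a scrapeable endpoint just returns plain text in the Prometheus exposition format, along these lines (the metric names here are illustrative):

```
# HELP http_requests_total Total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="get",code="500"} 3
# HELP app_queue_depth Current depth of the work queue.
# TYPE app_queue_depth gauge
app_queue_depth 7
```

Any endpoint that serves this format over HTTP can be scraped with a plain static_configs job like the ones above.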

After editing prometheus.yml, validate the configuration before applying it:
sudo promtool check config /etc/prometheus/prometheus.yml
sudo systemctl restart prometheus
If you want changes to apply without a full restart, reload the service after validation. Keeping your jobs organized by purpose—prometheus, node, apps, databases, blackbox—will make the monitoring stack much easier to scale later.
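A reload sketch using the SIGHUP behavior mentioned earlier (the HTTP reload endpoint works only if Prometheus was started with --web.enable-lifecycle):

```
# Validate first, then reload in place without a restart
sudo promtool check config /etc/prometheus/prometheus.yml
sudo kill -HUP "$(pidof prometheus)"
# Or, when the lifecycle API is enabled:
curl -X POST http://localhost:9090/-/reload
```

Either path picks up scrape job changes without dropping the in-memory TSDB head, which matters once dashboards depend on continuous data.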
Grafana is easiest to install from the official RPM repository on CentOS and other RHEL-like distributions. Grafana’s documentation for RHEL/Fedora says you can install from the RPM repository, a standalone RPM, or a tarball. It also distinguishes between Grafana OSS and Grafana Enterprise, noting that Enterprise is the recommended and default edition: it is free to use, includes everything in OSS, and unlocks its additional features only when licensed. [Grafana RHEL/Fedora Install]
For a repository-based installation:
sudo wget -q -O gpg.key https://rpm.grafana.com/gpg.key
sudo rpm --import gpg.key
cat <<'EOF' | sudo tee /etc/yum.repos.d/grafana.repo
[grafana]
name=grafana
baseurl=https://rpm.grafana.com
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://rpm.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
EOF
# OSS edition
sudo dnf install -y grafana
# Or Enterprise edition
# sudo dnf install -y grafana-enterprise
Once installed, start and enable the service:
sudo systemctl daemon-reload
sudo systemctl enable --now grafana-server
sudo systemctl status grafana-server
Grafana’s RPM-based install is a good match for CentOS because it integrates with the host package manager and supports ongoing updates through the repo. That makes maintenance much easier than manually replacing binaries every release. [Grafana RHEL/Fedora Install]
Once Grafana is running, the next step is to add Prometheus as a data source. Grafana’s docs state that Prometheus is supported natively as a built-in data source, and the standard workflow is to go to Connections or Data Sources, choose Prometheus, and enter the server URL. If Prometheus is local, the usual URL is http://localhost:9090. [Grafana Add Prometheus Data Source] [Grafana Prometheus Configure]
In the Grafana UI:
Log in as an administrator.
Go to Connections or Data sources.
Add Prometheus.
Set the URL to your Prometheus endpoint.
Save and test.
If Grafana and Prometheus are on separate hosts or in containers, remember that localhost only refers to the local network namespace. Use the actual host IP, DNS name, or container service name instead. Grafana’s configuration guidance explicitly warns about this distinction in containerized deployments. [Grafana Prometheus Configure]
After the data source is working, import starter dashboards. Grafana supports dashboard import workflows, and the Grafana ecosystem includes many ready-made dashboards for Node Exporter, Linux host metrics, and application frameworks. A good starting point is a node/system dashboard and a Prometheus self-monitoring dashboard. Once those are in place, build application-specific dashboards around latency, traffic, saturation, and errors.
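If you prefer configuration-as-code over clicking through the UI, Grafana can also provision the data source from a YAML file at startup (a minimal sketch; the file name itself is arbitrary):

```
# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
```

Restart grafana-server after adding the file; provisioned data sources survive upgrades and are easy to keep in version control.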

A modern monitoring setup should be secure by default, not just functional. Start with firewall policy: only expose the ports you need, and only to the networks that should access them. Prometheus commonly listens on 9090, Grafana on 3000, and Node Exporter on 9100, but Node Exporter is often best kept internal and scraped only from trusted monitoring hosts. The Prometheus Node Exporter guide and security audit material both reinforce that host exporters should not be treated as internet-facing services. [Node Exporter Guide] [Node Exporter Security Audit PDF]
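As a concrete sketch, firewalld rich rules can scope each port to the networks that actually need it (the subnet and host addresses below are placeholders to replace with your own):

```
# Grafana UI reachable only from an internal admin subnet (placeholder range)
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.0.0/24" port port="3000" protocol="tcp" accept'
# Prometheus UI likewise restricted (placeholder range)
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.0.0/24" port port="9090" protocol="tcp" accept'
# Node Exporter reachable only from the monitoring server itself (placeholder IP)
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.10" port port="9100" protocol="tcp" accept'
sudo firewall-cmd --reload
```

If no accept rule matches and the ports are not otherwise opened in the active zone, the traffic is dropped, which is exactly the default-deny posture you want for exporters.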
Use least privilege for all services:
run Prometheus as its own non-login system user
run Grafana under the packaged service account
restrict file permissions on /etc/prometheus, /var/lib/prometheus, and Grafana config files
avoid storing secrets in plain text where possible
Where authentication is required, prefer secure transport and access controls in front of the services. Grafana supports data source authentication options, and Prometheus can be placed behind a reverse proxy that provides TLS and basic auth if needed. For secrets management, use environment files, systemd drop-ins, or a dedicated vault mechanism instead of hardcoding credentials in YAML or shell history. Prometheus’s configuration is flexible, but that flexibility should not become secret sprawl. [Prometheus Configuration] [Grafana Data Sources]
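One low-friction pattern on systemd hosts is an environment file referenced from a drop-in, so credentials stay out of unit files, YAML, and shell history (paths here are illustrative):

```
# /etc/systemd/system/prometheus.service.d/secrets.conf
[Service]
# Keep credentials in a separate root-protected file instead of the unit itself
EnvironmentFile=/etc/prometheus/prometheus.env
```

Create /etc/prometheus/prometheus.env with mode 0600, then run systemctl daemon-reload and restart the service; the variables become available to the process without appearing in systemctl show output for the unit file itself.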
In practical terms, a hardened setup often includes:
TLS termination at Nginx, Apache, or a load balancer
IP allowlists for Grafana admin access
private-only access to Prometheus and exporters
service account tokens or credentials stored outside repo-managed files
regular package updates and config reviews
That combination keeps the stack useful without turning your observability server into a high-risk exposure point.
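For the TLS termination piece, a minimal Nginx sketch in front of Prometheus with basic auth (the hostname, certificate paths, and htpasswd file are assumptions you would supply):

```
# /etc/nginx/conf.d/prometheus.conf
server {
    listen 443 ssl;
    server_name prometheus.example.internal;

    ssl_certificate     /etc/pki/tls/certs/prometheus.crt;
    ssl_certificate_key /etc/pki/tls/private/prometheus.key;

    auth_basic           "Prometheus";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:9090;
        proxy_set_header Host $host;
    }
}
```

With this in place, bind Prometheus to 127.0.0.1 or firewall port 9090 so the proxy is the only way in.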
Prometheus and Grafana are most effective when they are part of a broader observability architecture rather than a standalone dashboard pair. In that model, Prometheus handles metrics collection and alerting, exporters bridge unsupported systems into Prometheus format, Grafana visualizes the metrics, and alerting routes actionable incidents to the right team. Prometheus explicitly documents exporters as the mechanism for third-party and host metrics, and Node Exporter is the standard for Linux system statistics. [Prometheus Exporters and Integrations] [Node Exporter Guide]
A healthy architecture typically looks like this:
Applications expose /metrics or are instrumented with a client library.
Exporters convert host, database, or infrastructure signals into metrics.
Prometheus scrapes and stores the metrics on a retention schedule.
Grafana reads from Prometheus for dashboards and exploration.
Alerting sends notifications when thresholds or SLOs are violated.
As environments grow, people often add complementary tooling:
Loki for logs
Tempo for traces
Alertmanager for routing and grouping alerts
In other words, Prometheus is usually the metrics pillar, Grafana is the visualization and user interface layer, and Loki and Tempo fill in the logs and traces pillars of observability. That division keeps each system focused and easier to operate. If you are planning for scale, think about shard boundaries, scrape cardinality, retention cost, and which dashboards genuinely need long-term historical data versus just recent operational context.
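Scrape cardinality is easy to watch from Prometheus's own metrics; two PromQL expressions worth keeping handy (run them in the expression browser):

```
# Total active series held in the local TSDB head block
prometheus_tsdb_head_series

# Top 10 metric names by series count, a quick cardinality hotspot check
topk(10, count by (__name__) ({__name__=~".+"}))
```

A sudden jump in either usually means a new label with unbounded values, which is worth catching before it inflates memory and storage.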
After installation, validate each layer separately. First, confirm the services are active:
sudo systemctl status prometheus
sudo systemctl status grafana-server
Then verify Prometheus itself is scraping targets. The built-in web UI and the Targets page should show UP for the jobs you configured. If a target is down, the problem is usually one of these: wrong port, exporter not running, firewall blocking access, YAML syntax error, or a permission issue preventing the service from reading its files. Prometheus’s docs also recommend using promtool and service reloads to verify configuration changes safely. [Prometheus Configuration] [Prometheus FAQ]
Useful checks:
# Validate Prometheus config
sudo promtool check config /etc/prometheus/prometheus.yml
# Check recent logs
sudo journalctl -u prometheus -n 100 --no-pager
sudo journalctl -u grafana-server -n 100 --no-pager
# Test local endpoints
curl http://localhost:9090/-/ready
curl http://localhost:3000/api/health
curl http://localhost:9100/metrics
For PromQL validation, open the Prometheus expression browser and run a basic query like:
up
or:
node_memory_MemAvailable_bytes
If Grafana cannot connect to Prometheus, double-check the data source URL and remember that containerized localhost behaves differently than host localhost. Grafana’s documentation calls this out directly. [Grafana Prometheus Configure]
Common fixes include:
opening firewall ports only where appropriate
correcting SELinux or file ownership problems
ensuring the prometheus user can read the config and write to its data directory
confirming that node_exporter is installed and bound to the expected port
re-running systemctl daemon-reload after service file edits
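On SELinux-enforcing hosts, denials show up in the audit log; a few checks worth running when a service fails to start (assuming the audit and policycoreutils tools are installed):

```
# Confirm the current SELinux mode
getenforce
# Look for recent denials affecting the services
sudo ausearch -m avc -ts recent
# Restore default file contexts for binaries copied into /usr/local/bin
sudo restorecon -Rv /usr/local/bin/prometheus /usr/local/bin/promtool
```

If ausearch shows denials against the Prometheus data directory, fixing the file context is almost always better than switching SELinux to permissive.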
Once the stack is working, turn it into an operational service rather than a one-time install. Start by automating updates, because Prometheus and Grafana both ship new releases regularly. Prometheus’s release pages document current stable versions, and Grafana’s RPM repository supports package-manager-based updates. [Prometheus Download] [Grafana RHEL/Fedora Install]
Then focus on backups and retention. Back up:
/etc/prometheus/prometheus.yml
any alerting or rule files
Grafana provisioning and dashboard definitions
custom scripts or exporters
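A minimal backup sketch covering the list above (the destination path is an example; adapt it to your backup tooling):

```
# Archive configs, rule files, service units, and Grafana provisioning in one pass
sudo tar czf "/root/monitoring-backup-$(date +%F).tar.gz" \
  /etc/prometheus \
  /etc/systemd/system/prometheus.service \
  /etc/grafana/provisioning
```

Ship the archive off-host; a backup stored only on the monitoring server does not help when that server is the thing that failed.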
Prometheus retention should be tuned to your storage budget and query patterns. Short retention may be fine for local troubleshooting, but longer retention helps with trend analysis and post-incident review. If you expect growth, plan for scaling before the disk fills up. A single Prometheus instance is excellent for a small-to-medium environment, but larger fleets may need sharding, remote write, long-term storage, or a higher-level metrics platform.
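The retention-versus-storage tradeoff can be estimated up front using the sizing guidance from the Prometheus storage documentation (needed disk is roughly retention seconds times ingested samples per second times bytes per sample, at about 1-2 bytes per sample). A back-of-the-envelope sketch with example inputs, not measurements:

```shell
# Rough local-TSDB sizing: disk = retention_seconds * samples_per_second * bytes_per_sample
retention_days=30
targets=20
series_per_target=1000
scrape_interval=15          # seconds
bytes_per_sample=2          # conservative end of the documented 1-2 bytes

samples_per_second=$(( targets * series_per_target / scrape_interval ))
retention_seconds=$(( retention_days * 24 * 3600 ))
bytes=$(( retention_seconds * samples_per_second * bytes_per_sample ))
echo "$(( bytes / 1024 / 1024 / 1024 )) GiB"   # prints: 6 GiB
```

Once you have a number, set the retention window explicitly with the --storage.tsdb.retention.time flag (for example, --storage.tsdb.retention.time=30d) in the service's ExecStart line rather than relying on the default.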
A sensible operational roadmap includes:
scheduled upgrades and patch windows
config management via Ansible, Puppet, or similar tooling
documented dashboard ownership
alert tuning to reduce noise
capacity planning for scrape volume and storage
high availability for critical monitoring paths
Prometheus has a documented long-term support path for stable releases, and its ecosystem is designed to evolve without forcing a rewrite of your monitoring model. [Prometheus LTS]
Installing Prometheus and Grafana on CentOS gives you a modern, flexible monitoring foundation that works well for both small servers and larger production environments. Prometheus collects host and application metrics with a proven pull-based model, exporters extend coverage to systems that do not speak Prometheus natively, and Grafana turns those metrics into dashboards your team can actually use. With proper systemd service definitions, secure repository-based installs, tight firewalling, and validated scrape jobs, the stack is both practical and production-ready. [Prometheus Installation] [Prometheus Exporters and Integrations] [Grafana RHEL/Fedora Install]
The key takeaway is that monitoring is not just about collecting data; it is about making operational signals reliable, secure, and actionable. Start with localhost and Node Exporter, add application metrics next, and then grow into alerting, logs, and traces as your environment matures. That incremental path keeps the setup simple at first while preserving a clear route to enterprise-grade observability.