If you are SSH-ing into a server to read /var/log/syslog, you are doing it wrong.
Logs should be centralized, searchable, and visualized. But setting up an ELK stack (Elasticsearch, Logstash, Kibana) is heavy. It eats RAM for breakfast.
Enter the PLG Stack: Promtail, Loki, Grafana.
- Promtail: The agent. It reads logs and ships them.
- Loki: The storage. It’s like Prometheus, but for logs.
- Grafana: The UI. You know this one.
The Configuration
We need a docker-compose.yaml plus one config file each for Loki and Promtail.
docker-compose.yaml
version: "3"
services:
  loki:
    image: grafana/loki:2.9.0
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
    volumes:
      - ./loki-config.yaml:/etc/loki/local-config.yaml
  promtail:
    image: grafana/promtail:2.9.0
    volumes:
      - /var/log:/var/log
      - ./promtail-config.yaml:/etc/promtail/config.yaml
    command: -config.file=/etc/promtail/config.yaml
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    depends_on:
      - loki
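Once these containers are up, Loki takes a few seconds before it accepts traffic. If you want to wait for it from a script, here is a minimal standard-library sketch that polls Loki's /ready endpoint; the base URL assumes the 3100 port mapping above.

```python
import time
import urllib.error
import urllib.request

def is_ready(base="http://localhost:3100", timeout=2.0):
    """Return True if Loki's /ready endpoint answers 200 OK."""
    try:
        with urllib.request.urlopen(f"{base}/ready", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused / timed out: Loki is not up yet.
        return False

def wait_ready(base="http://localhost:3100", attempts=30, delay=1.0):
    """Poll until Loki reports ready, or give up after `attempts` tries."""
    for _ in range(attempts):
        if is_ready(base):
            return True
        time.sleep(delay)
    return False
```

Call wait_ready() before running queries in CI or smoke tests, so you are not racing Loki's startup.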
loki-config.yaml
auth_enabled: false
server:
  http_listen_port: 3100
ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 168h
storage_config:
  boltdb:
    directory: /tmp/loki/index
  filesystem:
    directory: /tmp/loki/chunks
limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
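Before wiring up Promtail, you can sanity-check Loki by pushing a log line by hand. The endpoint below is Loki's real push API (/loki/api/v1/push); the label set and line are arbitrary test values. A minimal standard-library sketch:

```python
import json
import time
import urllib.request

def build_push_payload(line, labels):
    """Build the JSON body Loki's push API expects.

    Timestamps must be nanosecond-precision strings."""
    return {
        "streams": [
            {"stream": labels, "values": [[str(time.time_ns()), line]]}
        ]
    }

def push(line, labels, url="http://localhost:3100/loki/api/v1/push"):
    """POST one log line to Loki. Returns the HTTP status (204 on success)."""
    body = json.dumps(build_push_payload(line, labels)).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# e.g. push("hello from curl-free python", {"job": "test"})
```

If the push succeeds, querying {job="test"} in Grafana later should show the line.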
promtail-config.yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://loki:3100/loki/api/v1/push
scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*log
Usage
- Run docker-compose up -d.
- Open Grafana at http://localhost:3000 and log in with admin / admin.
- Add Data Source -> Loki.
- Set the URL to http://loki:3100 (the Compose service name, reachable from inside the Grafana container).
- Go to Explore, select Loki, and query {job="varlogs"}.
Boom. All your system logs in one place.
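And you are not limited to the Grafana UI: Loki's query_range endpoint works fine from scripts too. A small sketch, assuming the local setup above (the base URL and limit parameter are my choices, not anything the stack requires):

```python
import json
import urllib.parse
import urllib.request

def build_query_url(logql, base="http://localhost:3100", limit=100):
    """Build a URL for Loki's /loki/api/v1/query_range endpoint."""
    params = urllib.parse.urlencode({"query": logql, "limit": limit})
    return f"{base}/loki/api/v1/query_range?{params}"

def fetch_lines(logql, base="http://localhost:3100"):
    """Run a LogQL query and return just the log lines."""
    with urllib.request.urlopen(build_query_url(logql, base)) as resp:
        data = json.load(resp)
    # Each result stream carries [timestamp, line] pairs.
    return [line
            for stream in data["data"]["result"]
            for _ts, line in stream["values"]]

# e.g. fetch_lines('{job="varlogs"} |= "error"')
```

The |= operator in that last query is LogQL's line filter: it keeps only lines containing the given substring.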