Self-Hosting Grafana Loki with Docker Compose
What Is Grafana Loki?
Loki is a log aggregation system from Grafana Labs that stores and queries logs without full-text indexing. Unlike Elasticsearch, which indexes every word in every log line, Loki only indexes log metadata (labels) and stores compressed log chunks. This makes it dramatically lighter on resources — perfect for self-hosters who want centralized logging without dedicating 16 GB of RAM to Elasticsearch.
Updated March 2026: Verified with latest Docker images and configurations.
The typical stack is Loki (storage + queries) + Alloy (log collection agent) + Grafana (visualization). Think of it as Prometheus, but for logs.
Official site: grafana.com/oss/loki
Prerequisites
- A Linux server (Ubuntu 22.04+ recommended)
- Docker and Docker Compose installed (guide)
- 2 GB of free RAM (minimum for the full stack)
- 20 GB of free disk space (grows with log retention)
- A domain name (optional, for remote access)
Docker Compose Configuration
This deploys the complete stack: Loki for storage, Alloy for log collection, and Grafana for visualization.
Create a project directory:
```shell
mkdir -p ~/loki-stack && cd ~/loki-stack
```
Create `docker-compose.yml`:

```yaml
services:
  loki:
    image: grafana/loki:3.6.7
    container_name: loki
    ports:
      - "3100:3100"
    volumes:
      - ./loki-config.yaml:/etc/loki/local-config.yaml:ro
      - loki-storage:/loki
    command: -config.file=/etc/loki/local-config.yaml
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3100/ready"]
      interval: 30s
      timeout: 10s
      retries: 3
    restart: unless-stopped

  alloy:
    image: grafana/alloy:v1.13.2
    container_name: alloy
    volumes:
      - ./alloy-config.alloy:/etc/alloy/config.alloy:ro
      - /var/log:/var/log:ro                          # host logs
      - /var/run/docker.sock:/var/run/docker.sock:ro  # Docker container logs
    command: run /etc/alloy/config.alloy
    depends_on:
      loki:
        condition: service_healthy
    restart: unless-stopped

  grafana:
    image: grafana/grafana:12.4.1
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    environment:
      GF_SECURITY_ADMIN_USER: admin        # CHANGE THIS
      GF_SECURITY_ADMIN_PASSWORD: changeme # CHANGE THIS
    depends_on:
      loki:
        condition: service_healthy
    restart: unless-stopped

volumes:
  loki-storage:
  grafana-data:
```
Create `loki-config.yaml`:

```yaml
auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2024-01-01
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

limits_config:
  retention_period: 168h   # 7 days, adjust to your needs
  max_query_lookback: 168h

compactor:
  working_directory: /loki/compactor
  compaction_interval: 10m
  retention_enabled: true
  delete_request_store: filesystem   # required by Loki 3.x when retention is enabled
  retention_delete_delay: 2h
  retention_delete_worker_count: 150
```
Create `alloy-config.alloy`:

```alloy
// Discover running Docker containers
discovery.docker "containers" {
  host = "unix:///var/run/docker.sock"
}

// Collect Docker container logs
loki.source.docker "containers" {
  host       = "unix:///var/run/docker.sock"
  targets    = discovery.docker.containers.targets
  forward_to = [loki.write.local.receiver]
}

// Collect host syslog
loki.source.file "syslog" {
  targets = [
    {__path__ = "/var/log/syslog", job = "syslog"},
    {__path__ = "/var/log/auth.log", job = "authlog"},
  ]
  forward_to = [loki.write.local.receiver]
}

// Send everything to Loki
loki.write "local" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"
  }
}
```
Start the stack:
```shell
docker compose up -d
```
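Once the stack is up, it's worth confirming Loki reports ready before wiring up Grafana. A quick check, assuming the default ports from the compose file:

```shell
# All three containers should be running (loki should become "healthy")
docker compose ps

# Loki's readiness endpoint returns "ready" once startup completes
curl http://localhost:3100/ready
```

Loki can take 30-60 seconds to initialize on first start, so a "not ready" response right away is normal.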
Initial Setup
1. Open Grafana at `http://your-server-ip:3000`
2. Log in with the credentials from your environment variables
3. Navigate to Connections → Data Sources → Add data source
4. Select Loki
5. Set the URL to `http://loki:3100`
6. Click Save & Test; you should see "Data source successfully connected"
7. Go to Explore and select the Loki data source to start querying logs
Querying with LogQL
LogQL is Loki’s query language — it works like PromQL but for logs.
| Query | What It Does |
|---|---|
| `{job="syslog"}` | All syslog entries |
| `{container="nginx"} \|= "error"` | Nginx container logs containing "error" |
| `{job="authlog"} \|~ "Failed password"` | Failed SSH login attempts |
| `rate({container="nginx"}[5m])` | Log volume per second over 5 minutes |
| `{job="syslog"} \| json \| level="error"` | Parse JSON logs, filter by level |
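Grafana is the usual front end, but LogQL also works directly against Loki's HTTP API. A sketch using `curl` and the `query_range` endpoint; `--data-urlencode` takes care of the braces and pipes in the query (adjust the host and `limit` to your setup):

```shell
# Fetch up to 100 recent syslog lines straight from Loki
curl -G http://localhost:3100/loki/api/v1/query_range \
  --data-urlencode 'query={job="syslog"}' \
  --data-urlencode 'limit=100'
```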
Configuration
Retention Period
Adjust `retention_period` in `loki-config.yaml` to control how long logs are kept:

```yaml
limits_config:
  retention_period: 720h   # 30 days
```

After changing the config, restart Loki:

```shell
docker compose restart loki
```
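`retention_period` is a duration, and this guide expresses it in hours, so it helps to sanity-check the conversion. A trivial shell sketch:

```shell
# Convert a retention window in days to the hours Loki expects
days=30
echo "retention_period: $((days * 24))h"
```

This prints `retention_period: 720h`, matching the 30-day example above.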
Alerting
Loki supports alerting rules that fire when log patterns match. Create a rules file and mount it into the Loki container. For most self-hosters, setting up alerts in Grafana is simpler — use the Grafana alerting UI with Loki as a data source.
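If you do go the ruler route, the rules are Prometheus-style YAML placed under the `rules_directory` configured earlier (`/loki/rules`; with `auth_enabled: false`, Loki reads from a `fake` tenant subdirectory). A minimal sketch; the rule name, threshold, and severity are illustrative, and you would still need to point the ruler at an Alertmanager to deliver notifications:

```yaml
groups:
  - name: ssh-alerts
    rules:
      - alert: HighFailedSSHLogins   # illustrative name
        expr: sum(count_over_time({job="authlog"} |= "Failed password" [5m])) > 10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: More than 10 failed SSH logins in the last 5 minutes
```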
Why Loki Over Elasticsearch?
| Aspect | Loki | Elasticsearch |
|---|---|---|
| Indexing | Labels only (metadata) | Full-text (every word) |
| RAM usage | 512 MB – 2 GB | 8 – 16 GB minimum |
| Disk usage | Compressed chunks | Large Lucene indices |
| Query language | LogQL (Prometheus-like) | Query DSL (complex) |
| Learning curve | Low if you know Prometheus | Steep |
| Setup complexity | 3 containers | 5+ containers (ELK stack) |
| Best for | Self-hosters, small–medium logs | Enterprise full-text search |
Loki trades query power for efficiency. You can’t do arbitrary full-text search across all fields like Elasticsearch. Instead, you label your logs and filter by those labels, then grep within matching streams. For 95% of self-hosting log analysis, this is more than enough.
Backup
The critical data is in the `loki-storage` volume. Note that Docker Compose prefixes named volumes with the project directory, so the actual volume may be named something like `loki-stack_loki-storage`; check `docker volume ls` and adjust the commands accordingly:

```shell
docker compose stop loki
docker run --rm -v loki-storage:/data -v $(pwd):/backup alpine tar czf /backup/loki-backup.tar.gz -C /data .
docker compose up -d loki
```

Grafana dashboards and data source configs are in `grafana-data`. Back up both volumes.
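Restoring is the reverse operation. A sketch, assuming the same volume names as the backup commands above:

```shell
# Stop Loki, unpack the archive back into the volume, restart
docker compose stop loki
docker run --rm -v loki-storage:/data -v $(pwd):/backup alpine \
  sh -c "cd /data && tar xzf /backup/loki-backup.tar.gz"
docker compose up -d loki
```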
See the Backup Strategy guide for automated approaches.
Troubleshooting
Loki shows “not ready” in healthcheck
Symptom: Grafana can’t connect to Loki. Container logs show readiness probe failures.
Fix: Loki needs time to initialize TSDB indices on first start. Wait 30-60 seconds, then check the logs:

```shell
docker compose logs loki
```
No logs appearing in Grafana
Symptom: Data source connects but queries return nothing.
Fix: Check that Alloy is running and forwarding logs:

```shell
docker compose logs alloy
```

Verify the Docker socket is mounted (for container log collection) and that `/var/log` permissions allow read access.
Disk usage growing fast
Symptom: Loki storage volume consuming unexpected disk space.
Fix: Ensure `retention_enabled: true` and a `retention_period` are set in your config. The compactor needs to run, so check that `compaction_interval` is configured; in Loki 3.x the compactor also needs `delete_request_store` set for retention deletes to happen.
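To see where the space is actually going, you can inspect the storage volume from a throwaway container. A sketch assuming the default volume name (Compose may prefix it with the project directory, e.g. `loki-stack_loki-storage`; check `docker volume ls`):

```shell
# Per-directory usage inside Loki's storage volume
docker run --rm -v loki-storage:/loki alpine du -sh /loki/*
```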
Resource Requirements
| Component | RAM (idle) | RAM (load) | CPU |
|---|---|---|---|
| Loki | 300–500 MB | 1–2 GB | 0.5–1 core |
| Alloy | 50–128 MB | 128–256 MB | 0.1–0.5 core |
| Grafana | 200–400 MB | 500 MB–1 GB | 0.5–1 core |
| Total | ~700 MB | ~2–3 GB | 1–2 cores |
For a homelab monitoring 10–20 containers, the idle footprint is well under 1 GB. Loki scales to millions of log lines per day on modest hardware.
Verdict
Loki is the best self-hosted logging solution for most people. It gives you centralized log aggregation, powerful queries, and Grafana integration at a fraction of the resource cost of Elasticsearch. If you’re already running Grafana and Prometheus for metrics, adding Loki for logs is the natural next step. If you need full-text search across massive log volumes and have 16+ GB of RAM to spare, Graylog or Elasticsearch are still the right tools — but most self-hosters don’t.
FAQ
Can Loki replace Elasticsearch/ELK for log management?
For most self-hosting use cases, yes. Loki handles centralized log aggregation at a fraction of the resource cost (512 MB vs 8-16 GB RAM). The trade-off is Loki doesn’t do arbitrary full-text search — it indexes labels (metadata) and filters within streams. For 95% of log analysis, this is sufficient. Use Elasticsearch only if you need full-text search across millions of structured fields.
What replaced Promtail for log collection?
Grafana Alloy replaced Promtail as the recommended log collection agent. Promtail still works but is in maintenance mode. Alloy uses a configuration language (not YAML) and supports metrics, logs, and traces — making it a unified collection agent. Migrate by converting your Promtail YAML to Alloy syntax.
Can I use Loki without Grafana?
Yes. Loki exposes an HTTP API at port 3100 that you can query directly with curl or any HTTP client using LogQL. However, Grafana provides the visualization, dashboards, and alerting that make Loki practical. Running Loki without Grafana is like running Prometheus without a dashboard — technically possible but not useful.
How do I set up alerts on log patterns?
Two approaches: (1) Loki ruler — define alerting rules in YAML that fire when LogQL queries match. (2) Grafana alerting — create alert rules in the Grafana UI using Loki as a data source. Grafana alerting is simpler for most users and supports email, Slack, and webhook notifications.
How much disk space does Loki use?
Significantly less than Elasticsearch due to compressed chunk storage and label-only indexing. A typical homelab with 10-20 containers generating moderate logs uses 1-5 GB per week of retention. Set retention_period in the config to control disk growth — the compactor automatically deletes old data.
Can Loki collect logs from remote servers?
Yes. Install Alloy (or Promtail) on each remote server and configure it to push logs to your central Loki instance at http://loki-server:3100/loki/api/v1/push. Loki accepts logs from multiple sources simultaneously. Use labels to distinguish between servers.
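On a remote machine, the Alloy config has the same shape as the local one; only the write endpoint changes. A sketch, with `loki-server` standing in for your central host and a `host` label added so servers are distinguishable in queries:

```alloy
// Ship this server's syslog to the central Loki instance
loki.source.file "syslog" {
  targets = [
    {__path__ = "/var/log/syslog", job = "syslog", host = "remote-1"},
  ]
  forward_to = [loki.write.central.receiver]
}

loki.write "central" {
  endpoint {
    url = "http://loki-server:3100/loki/api/v1/push"
  }
}
```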