Centralized Logging¶
This guide outlines the process of extracting Loki platform logs from a Kubernetes cluster and analyzing them locally using a temporary Loki and Grafana setup. This method is useful for debugging, historical data analysis, and developing custom dashboards without impacting your production environment.
Prerequisites:
- kubectl configured to access your Kubernetes cluster.
- docker installed on your local machine.
Estimated Time: 15-30 minutes, depending on the size of your Loki data.
Step 1: Stop the Loki StatefulSet¶
To ensure a consistent and complete snapshot of your Loki database, scale the Loki StatefulSet down to zero replicas. This lets Loki shut down gracefully and flush all pending data to its persistent storage.
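For example, assuming the StatefulSet name mvtco-loki used in the cleanup step below:
kubectl -n cattle-system scale statefulset mvtco-loki --replicas=0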
Step 2: Temporarily Mount a Pod to the Loki PVC¶
You cannot directly access the contents of a Kubernetes Persistent Volume Claim (PVC) from your local machine. Create a temporary busybox pod to act as an intermediary, providing a mount point from which to generate and extract a tarball of the Loki data.
Create a file named pvc-extract-pod.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: pvc-extract
  namespace: cattle-system
spec:
  restartPolicy: Never
  containers:
    - name: shell
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: loki-data
          mountPath: /mnt/loki
  volumes:
    - name: loki-data
      persistentVolumeClaim:
        claimName: storage-mvtco-loki-0 # Ensure this matches your Loki PVC name
Apply the pod definition:
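kubectl apply -f pvc-extract-pod.yaml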
Wait for the pod to be running:
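kubectl -n cattle-system wait --for=condition=Ready pod/pvc-extract --timeout=120s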
Step 3: Generate a Tarball of the Loki Database¶
Execute the following command inside the pvc-extract pod to create a compressed tar archive of the Loki database.
kubectl -n cattle-system exec pvc-extract -- sh -c "cd /mnt/loki && tar czf /tmp/loki-data.tar.gz ."
Step 4: Extract the Tarball to Your Local Machine¶
Copy the generated tarball from the pvc-extract pod to your local machine:
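kubectl -n cattle-system cp pvc-extract:/tmp/loki-data.tar.gz ./loki-data.tar.gz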
Step 5: Set Up a Local Directory and Permissions¶
Create a local directory to extract the Loki data and adjust permissions to ensure your local Loki instance can read and write to these files.
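For example, assuming a loki-db-local directory in your current working directory (the name referenced in later steps), and that your local Loki container runs as the grafana/loki image's default user, UID 10001 (adjust if your image differs):
mkdir -p loki-db-local
tar xzf loki-data.tar.gz -C loki-db-local
# grafana/loki images typically run as UID 10001; change the owner if yours differs
sudo chown -R 10001:10001 loki-db-local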
Step 6: Create local-config.yaml for Local Loki¶
Create a file named local-config.yaml with the following content. This configuration mimics your cluster's Loki deployment but is optimized for a single-node local setup.
auth_enabled: false

server:
  http_listen_port: 3100
  log_level: info

common:
  ring:
    kvstore:
      store: inmemory

ingester:
  wal:
    enabled: true
    dir: /var/loki/loki/wal
    checkpoint_duration: 5m
  lifecycler:
    final_sleep: 0s
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
  chunk_idle_period: 5m
  chunk_retain_period: 30s

storage_config:
  boltdb_shipper:
    active_index_directory: /var/loki/loki/boltdb-shipper-active
    cache_location: /var/loki/loki/boltdb-shipper-cache
    shared_store: filesystem
  filesystem:
    directory: /var/loki/chunks

compactor:
  working_directory: /var/loki/loki/boltdb-shipper-compactor
  shared_store: filesystem

schema_config:
  configs:
    - from: 2020-01-01
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
Step 7: Run Loki Locally as a Docker Container¶
Run a local Loki instance using Docker, mounting the extracted data and your local-config.yaml.
First, set an environment variable for the data directory. Ensure you replace /home/ubuntu/tmp/loki-db-local with the actual path to your loki-db-local directory.
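export LOKI_DATA_DIR=/home/ubuntu/tmp/loki-db-local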
Now, run the Loki container:
# Use the Loki version that matches your cluster if possible.
# Mount the data at /var/loki so the paths in local-config.yaml resolve.
docker run -d \
  --name loki-local \
  -p 3100:3100 \
  -v "$LOKI_DATA_DIR":/var/loki \
  -v "$(pwd)/local-config.yaml":/etc/loki/local-config.yaml \
  grafana/loki:2.9.8 \
  -config.file=/etc/loki/local-config.yaml
Step 8: Run a Local Grafana Instance¶
Run a local Grafana instance using Docker to serve as a frontend for exploring your Loki data.
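For example, using the container name grafana-local from the cleanup step (pin a specific Grafana version instead of latest if you need reproducibility):
docker run -d \
  --name grafana-local \
  -p 3000:3000 \
  grafana/grafana:latest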
Step 9: Configure Grafana to Connect to Local Loki¶
- Access Grafana: Open your web browser and navigate to http://localhost:3000. If you are running this on a cloud instance, use the public DNS/IP of your machine followed by port 3000.
- Login: The default username/password is admin/admin.
- Add Data Source:
  - Navigate to Connections (plug icon on the left) > Data sources.
  - Click Add new data source.
  - Select Loki.
- Configure Loki Data Source:
  - Name: Loki Local (or any descriptive name).
  - HTTP > URL: Enter the URL for your local Loki instance.
    - If running entirely on your local machine: http://localhost:3100
    - If running on a cloud provider, use the public DNS/IP of the machine where Loki is running, e.g., http://your-public-dns-or-ip:3100
  - Auth: Ensure Basic auth is unchecked.
- Save & Test: Click Save & Test. You should see a green message "Data source successfully connected."
Step 10: Explore Your Loki Data¶
- Navigate to Explore (compass icon on the left).
- From the Data source dropdown, select Loki Local.
- You can now use LogQL queries to explore your extracted Loki database as it was when you pulled the data. Adjust the lookback time as needed to view your historical logs.
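For example, a simple LogQL query that filters for lines containing "error" (the namespace label here is illustrative; browse the labels your cluster's log collector actually attaches using the query builder):
{namespace="cattle-system"} |= "error"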
Cleanup (Optional):
After you are finished with your local debugging, you can clean up the temporary resources:
# Stop and remove local Docker containers
docker stop grafana-local loki-local
docker rm grafana-local loki-local
# Remove the local Loki data directory
rm -rf loki-db-local
# Delete the temporary pod in Kubernetes
kubectl -n cattle-system delete pod pvc-extract
# Scale your Loki StatefulSet back up in Kubernetes
kubectl -n cattle-system scale statefulset mvtco-loki --replicas=1 # Adjust to your desired replica count