## Changing logs representation
You can also change the representation of logs: either use structured logging, which produces parsing-friendly JSON, or traditional console-friendly logging at a specific level. You change the log representation by editing the `deploy/operator.yml` file, which sets the following environment variables with self-explanatory names and values:
```yaml
env:
  ...
  value: INFO
...
```
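
After editing `deploy/operator.yml`, re-apply the manifest so the Operator picks up the new values. A minimal sketch of the assumed workflow (adjust the path and namespace to your deployment):

```{.bash data-prompt="$"}
# re-apply the Operator manifest so the updated environment variables take effect
$ kubectl apply -f deploy/operator.yml
```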
## Cluster-level logging
In a distributed Kubernetes environment, it's often difficult to debug issues because logs are tied to the lifecycle of individual Pods and containers. If a Pod fails and restarts, its logs are lost, making it hard to identify the root cause of an issue.
Percona Operator for MongoDB addresses this challenge with **cluster-level logging**: logs are stored persistently, independent of individual Pods, and remain available for review even after a Pod restarts.
The Operator collects logs using [Fluent Bit :octicons-link-external-16:](https://fluentbit.io/), a lightweight log processor that supports many output plugins and has broad forwarding capabilities. Fluent Bit runs as a sidecar container within each database Pod. It collects logs from the primary `mongod` container, adds metadata, and stores them in a single file on a dedicated log-specific Persistent Volume Claim (PVC) at `/data/db/logs/`. This allows logs to survive Pod restarts and remain accessible for later debugging.
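
To verify that log files are indeed persisted, you can list the contents of this directory from within the Pod. A minimal sketch, assuming the log collector sidecar is the `logs` container (as in the `kubectl logs` example below) and that the log PVC is mounted there at `/data/db/logs/`:

```{.bash data-prompt="$"}
# container name (logs) and mount path are assumptions based on the examples in this section
$ kubectl exec my-cluster-name-rs0-0 -c logs -- ls -l /data/db/logs/
```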
Logs are also streamed to standard output, making them accessible via the `kubectl logs` command for quick troubleshooting:
```{.bash data-prompt="$"}
$ kubectl logs my-cluster-name-rs0-0 -c logs
```
Currently, logs are collected only for the `mongod` instance. All other logs are ephemeral, meaning they will not persist after a Pod restart. Logs are stored for 7 days and are rotated afterwards.
### Configure log collector
Cluster-level logging is enabled by default and is controlled with the `logcollector.enabled` key in the `deploy/cr.yaml` Custom Resource manifest.
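
For example, the following fragment of `deploy/cr.yaml` keeps the feature on (set it to `false` to disable it). A minimal sketch; nesting under `spec` is assumed from the usual Custom Resource layout:

```yaml
spec:
  logcollector:
    enabled: true   # set to false to turn cluster-level logging off
```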
You can additionally configure Fluent Bit using the `logcollector.configuration` subsection in the `deploy/cr.yaml` Custom Resource manifest. This allows you to define custom filters and output plugins to suit your specific logging and monitoring needs.
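
The exact contents of `logcollector.configuration` depend on your needs. The sketch below assumes the field accepts a raw Fluent Bit configuration snippet as a multi-line string; both the `grep` filter and the `log` field name are illustrative only, so check the Custom Resource reference for the exact format:

```yaml
spec:
  logcollector:
    enabled: true
    configuration: |
      # illustrative only: keep records whose "log" field matches "error"
      [FILTER]
          Name   grep
          Match  *
          Regex  log error
```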
When you add a configuration to `logcollector.configuration` and this field was previously empty, the change triggers a Smart Update. However, if the field already contained a configuration, subsequent changes to it won't trigger an update automatically.