title: "How to Use Fluentd and Loki to Access Service Logs"
description: "Learn to set up Fluentd and Loki for open source data logging, then use Grafana for data aggregation and visualization."
authors: ["Tom Henderson"]
contributors: ["Tom Henderson"]
published: 2024-06-19
keywords: ['fluentd and loki','fluentd','loki','k8s','open source data logging','service logs','grafana dashboard','data aggregation','data processing','data indexing','data storage']
[Fluentd](https://www.fluentd.org/) and [Loki](https://grafana.com/oss/loki/) are part of a flexible chain of service-logging apps. When combined with [Prometheus](https://prometheus.io/) and [Grafana](https://grafana.com/), they create a full stack for log presentation and querying. The Fluentd/Loki/Prometheus/Grafana stack is widely adopted and provides decision support using time series-based log data and streams from various log formats. This stack can scale when an instance or pod deployment configuration changes, and also works with Kubernetes components for cloud-native stack control through our Marketplace [Prometheus & Grafana deployment](https://www.linode.com/marketplace/apps/linode/prometheus-grafana/).
**Time source:** Accurate timestamps within log data sources and consistency in changes made through log aggregation processes are critical for ensuring visualization accuracy later in the stack. All instances, whether log sources or log processors, must be synchronized to the same time source. Use a common NTP server for all instances in the stack to ensure synchronization with this time source and maintain system integrity.
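As a minimal sketch, on Ubuntu instances running `chrony` you might point every member of the stack at the same time source in `/etc/chrony/chrony.conf` (the pool name here is illustrative; substitute your own NTP server):

```
# /etc/chrony/chrony.conf (excerpt)
# Use the same time source on every instance in the stack
pool pool.ntp.org iburst
```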
**Log and data sources:** Fluentd plays a crucial role in the logging stack by accumulating logs from various sources using plugins. For example, log sources could include the `/var/log` directories on separate Linux instances and a Kubernetes pod. Fluentd can only ingest from sources for which an input plugin exists, whether provided by Fluentd or created by users. There are numerous input plugins available for various data sources.
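As a sketch of what such a source looks like, Fluentd's `tail` input plugin can follow a syslog file. The path, position file, and tag below are illustrative, not required values:

```
<source>
  @type tail
  # Follow the system log on this instance
  path /var/log/syslog
  # Track how far the file has been read between restarts
  pos_file /var/log/fluentd/syslog.pos
  tag system.syslog
  <parse>
    @type syslog
  </parse>
</source>
```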
The gathered Fluentd logs are organized into JSON-formatted entries by Loki. Prometheus stores these Loki logs, which are otherwise ephemeral. The Prometheus store acts as the data source for Grafana's visualization console. Grafana and Prometheus are typically deployed together. This example uses our Marketplace [Prometheus & Grafana installation](https://www.linode.com/marketplace/apps/linode/prometheus-grafana/).
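On the Fluentd side, forwarding to Loki is typically handled by an output plugin such as `fluent-plugin-grafana-loki`. A minimal match block might look like the following; the URL assumes a Loki instance on the same host, and the label is an arbitrary example:

```
<match **>
  @type loki
  # Loki running on the same host, default port
  url "http://localhost:3100"
  <label>
    job fluentd
  </label>
</match>
```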
**Alternative software configurations:** Other configurations use Promtail, Loki, Prometheus, and Grafana either separately or in combination. For instance, Loki, Promtail, and Grafana work well in strictly Kubernetes-sourced log consoles, but have limited plugins for other data sources.
## Before You Begin
The example stack in this guide uses three groups of instances:
- **Group 1:** The discrete Linux instances and a Kubernetes pod to be monitored.
- **Group 2:** The instance where Fluentd gathers the logs and sends them to a Loki instance on the same host.
- **Group 3:** An instance running Grafana and Prometheus, deployed to a Nanode using our Prometheus & Grafana Marketplace app.
1. If you do not already have one deployed, create a Compute Instance with at least 4 GB of memory. See our [Getting Started with Linode](/docs/products/platform/get-started/) and [Creating a Compute Instance](/docs/products/compute/compute-instances/guides/create/) guides.
1. Follow our [Setting Up and Securing a Compute Instance](/docs/products/compute/compute-instances/guides/set-up-and-secure/) guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.
1. Follow the instructions in our [Deploy Prometheus and Grafana through the Linode Marketplace](/docs/products/tools/marketplace/guides/grafana/) guide. Choose the latest available version of Ubuntu. A Nanode 1 GB plan is suitable for this example stack.
{{< note >}}
This guide is written for a non-root user. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, see the [Users and Groups](/docs/guides/linux-users-and-groups/) guide.
{{< /note >}}
During deployment, a Let's Encrypt TLS certificate is installed, which allows you to access the instance over HTTPS in a web browser. When the installation completes, the Grafana settings menu provides fields to connect to the Loki/Fluentd combination.

## Fluentd Installation
Fluentd gathers logs through its input plugins. This example installs the Ruby gem version of Fluentd on the Prometheus & Grafana Nanode. The commands below install Ruby, along with its development libraries, and Fluentd.
1. Update and upgrade the Ubuntu system, then restart the Nanode:
    The instance applies all pending updates and upgrades, then reboots so that subsequent installation steps work against the current package revisions. This reboot is required.
1. Install Ruby and its development libraries:
    ```command
    sudo apt install ruby-full
    ```
These messages are correlated from log sources originating from `/var/log/` information across the monitored instances and the Kubernetes pod.
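For example, assuming the Fluentd output to Loki was labeled `job=fluentd` (an illustrative label, not a requirement), a LogQL query in Grafana's Explore view such as the following selects that stream and filters it for error entries:

```
{job="fluentd"} |= "error"
```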
## Conclusion
The Fluentd/Loki combination excels in handling diverse log source streams and efficiently archiving log data. The Prometheus/Grafana combination serves as the log store archive and central hub for visualizing time-series events across various log sources, whether discrete instances or Kubernetes pods.
Prometheus captures both persistent and ephemeral log data as instances or pods are created and terminated. Ephemeral log data from short-lived pods would otherwise leave no trace in a log-polling environment, as pods go in and out of existence through production service cycles.
You can adapt this example and deploy similar configurations across different system domains to provide comprehensive tracking and correlation of data streams through a centralized console.