
Commit 4b28f04

Add mdox link checking and formatting
Signed-off-by: Saswata Mukherjee <[email protected]>
1 parent d304301 commit 4b28f04

27 files changed (+264, -302 lines)

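The Makefile and CI wiring that this commit adds for mdox is not visible on this page, so as orientation only, here is a sketch of how mdox is typically run against a docs tree: one in-place formatting pass and one check pass that also validates links. The file selection and the `--check` / `--links.validate` flags are assumptions for illustration, not copied from the commit itself.

```sh
# Sketch only: typical mdox invocations (file list and flags are assumptions).
MD_FILES=$(find . -name '*.md' -not -path './vendor/*')

# Format markdown in place: front matter ordering, emphasis style, and
# blank lines around code fences - the kind of changes visible in this diff.
mdox fmt $MD_FILES

# Check mode for CI: fail if any file would be reformatted or a link is dead.
mdox fmt --check --links.validate $MD_FILES
```
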
ADOPTERS.md

Lines changed: 4 additions & 4 deletions
@@ -1,7 +1,7 @@
 ---
-title: "Adopters"
-date: 2021-03-08T23:50:39+01:00
+title: Adopters
 draft: false
+date: "2021-03-08T23:50:39+01:00"
 ---

 <!--
@@ -27,7 +27,7 @@ This document tracks people and use cases for the Prometheus Operator in product

 Go ahead and [add your organization](https://github.com/prometheus-operator/prometheus-operator/edit/master/ADOPTERS.md) to the list.

-## Clyso
+## Clyso

 [clyso.com](https://www.clyso.com/en)

@@ -83,7 +83,7 @@ Details:
 - 20000 samples/s
 - 1M active series

-## Innovaccer ##
+## Innovaccer

 https://innovaccer.com/

CHANGELOG.md

Lines changed: 9 additions & 8 deletions
@@ -2,7 +2,7 @@

 No change since v0.51.0.

-_The CI automation failed to build the v0.51.0 images so we had to create a new patch release._
+*The CI automation failed to build the v0.51.0 images so we had to create a new patch release.*

 ## 0.51.0 / 2021-09-24

@@ -181,11 +181,12 @@ future.

 ## 0.42.1 / 2020-09-21

-* [BUGFIX] Bump client-go to fix watch bug
+* [BUGFIX] Bump client-go to fix watch bug

 ## 0.42.0 / 2020-09-09

-The Prometheus Operator now lives in its own independent GitHub organization.
+The Prometheus Operator now lives in its own independent GitHub organization.
+
 We have also added a governance (#3398).

 * [FEATURE] Move API types out into their own module (#3395)
@@ -230,12 +231,12 @@ We have also added a governance (#3398).

 * [CHANGE] Update dependencies to prometheus 2.18 (#3231)
 * [CHANGE] Add support for new prometheus versions (v2.18 & v2.19) (#3284)
-* [CHANGE] bump Alertmanager default version to v0.21.0 (#3286)
+* [CHANGE] bump Alertmanager default version to v0.21.0 (#3286)
 * [FEATURE] Automatically disable high availability mode for 1 replica alertmanager (#3233)
 * [FEATURE] thanos-sidecar: Add minTime arg (#3253)
-* [FEATURE] Add scrapeTimeout as global configurable parameter (#3250)
-* [FEATURE] Add EnforcedSampleLimit which enforces a global sample limit (#3276)
-* [FEATURE] add ability to exclude rules from namespace label enforcement (#3207)
+* [FEATURE] Add scrapeTimeout as global configurable parameter (#3250)
+* [FEATURE] Add EnforcedSampleLimit which enforces a global sample limit (#3276)
+* [FEATURE] add ability to exclude rules from namespace label enforcement (#3207)
 * [BUGFIX] thanos sidecar: log flags double definition (#3242)
 * [BUGFIX] Mutate rule labels, annotations to strings (#3230)

@@ -513,7 +514,7 @@ and accepts and comma-separated list of namespaces as a string.
 ## 0.22.0 / 2018-07-09

 * [FEATURE] Allow setting volume name via volumetemplateclaimtemplate in prom and alertmanager (#1538)
-* [FEATURE] Allow setting custom tags of container images (#1584)
+* [FEATURE] Allow setting custom tags of container images (#1584)
 * [ENHANCEMENT] Update default Thanos to v0.1.0-rc.2 (#1585)
 * [ENHANCEMENT] Split rule config map mounted into Prometheus if it exceeds Kubernetes config map limit (#1562)
 * [BUGFIX] Mount Prometheus data volume into Thanos sidecar & pass correct path to Thanos sidecar (#1583)

CONTRIBUTING.md

Lines changed: 11 additions & 11 deletions
@@ -1,20 +1,20 @@
 ---
-title: "Contributing"
-description: "How can I contribute to the Prometheus Operator and kube-prometheus?"
-lead: ""
-date: 2021-03-08T08:48:57+00:00
-lastmod: 2021-03-08T08:48:57+00:00
-draft: false
-images: []
-menu:
-  docs:
-    parent: "prologue"
 weight: 200
 toc: true
+title: Contributing
+menu:
+  docs:
+    parent: prologue
+lead: ""
+lastmod: "2021-03-08T08:48:57+00:00"
+images: []
+draft: false
+description: How can I contribute to the Prometheus Operator and kube-prometheus?
+date: "2021-03-08T08:48:57+00:00"
 ---

 This project is licensed under the [Apache 2.0 license](LICENSE) and accept
-contributions via GitHub pull requests. This document outlines some of the
+contributions via GitHub pull requests. This document outlines some of the
 conventions on development workflow, commit message formatting, contact points
 and other resources to make it easier to get your contribution accepted.

Documentation/additional-scrape-config.md

Lines changed: 4 additions & 5 deletions
@@ -5,8 +5,7 @@ additional Prometheus scrape configurations. Scrape configurations specified
 are appended to the configurations generated by the Prometheus Operator.

 Job configurations specified must have the form as specified in the official
-[Prometheus documentation](
-https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config).
+[Prometheus documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config).
 As scrape configs are appended, the user is responsible to make sure it is
 valid. *Note* that using this feature may expose the possibility to break
 upgrades of Prometheus.
@@ -17,7 +16,7 @@ scrape configs are going to break Prometheus after the upgrade.
 ## Creating an additional configuration

 First, you will need to create the additional configuration.
-Below we are making a simple "prometheus" config. Name this
+Below we are making a simple "prometheus" config. Name this
 `prometheus-additional.yaml` or something similar.

 ```yaml
@@ -62,5 +61,5 @@ NOTE: Use only one secret for ALL additional scrape configurations.

 ## Additional References

-* [Prometheus Spec](api.md#prometheusspec)
-* [Additional Scrape Configs](../example/additional-scrape-configs)
+* [Prometheus Spec](api.md#prometheusspec)
+* [Additional Scrape Configs](../example/additional-scrape-configs)

Documentation/custom-configuration.md

Lines changed: 1 addition & 3 deletions
@@ -3,9 +3,7 @@
 <i class="fa fa-exclamation-triangle"></i><b> Note:</b> Starting with v0.39.0, Prometheus Operator requires use of Kubernetes v1.16.x and up.
 </div>

-
-**Deprecation Warning:** The _custom configuration_ option of the Prometheus Operator will be deprecated in favor of the [_additional scrape config_](additional-scrape-config.md) option.
-
+**Deprecation Warning:** The *custom configuration* option of the Prometheus Operator will be deprecated in favor of the [*additional scrape config*](additional-scrape-config.md) option.

 # Custom Configuration

Documentation/design.md

Lines changed: 12 additions & 18 deletions
@@ -1,15 +1,15 @@
 ---
-title: "Design"
-description: "This document describes the design and interaction between the custom resource definitions that the Prometheus Operator introduces."
-lead: ""
-date: 2021-03-08T08:49:31+00:00
-draft: false
-images: []
-menu:
-  docs:
-    parent: "operator"
 weight: 100
 toc: true
+title: Design
+menu:
+  docs:
+    parent: operator
+lead: ""
+images: []
+draft: false
+description: This document describes the design and interaction between the custom resource definitions that the Prometheus Operator introduces.
+date: "2021-03-08T08:49:31+00:00"
 ---

 This document describes the design and interaction between the custom resource definitions that the Prometheus Operator introduces.
@@ -35,7 +35,6 @@ The CRD specifies which `ServiceMonitor`s should be covered by the deployed Prom

 If no selection of `ServiceMonitor`s is provided, the Operator leaves management of the `Secret` to the user, which allows to provide custom configurations while still benefiting from the Operator's capabilities of managing Prometheus setups.

-
 ## Alertmanager

 The `Alertmanager` custom resource definition (CRD) declaratively defines a desired Alertmanager setup to run in a Kubernetes cluster. It provides options to configure replication and persistent storage.
@@ -44,15 +43,13 @@ For each `Alertmanager` resource, the Operator deploys a properly configured `St

 When there are two or more configured replicas the operator runs the Alertmanager instances in high availability mode.

-
 ## ThanosRuler

 The `ThanosRuler` custom resource definition (CRD) declaratively defines a desired [Thanos Ruler](https://github.com/thanos-io/thanos/blob/master/docs/components/rule.md) setup to run in a Kubernetes cluster. With Thanos Ruler recording and alerting rules can be processed across multiple Prometheus instances.

-A `ThanosRuler` instance requires at least one `queryEndpoint` which points to the location of Thanos Queriers or Prometheus instances. The `queryEndpoints` are used to configure the `--query` arguments(s) of the Thanos runtime.
+A `ThanosRuler` instance requires at least one `queryEndpoint` which points to the location of Thanos Queriers or Prometheus instances. The `queryEndpoints` are used to configure the `--query` arguments(s) of the Thanos runtime.
 Further information can also be found in the [Thanos doc](thanos.md).

-
 ## ServiceMonitor

 The `ServiceMonitor` custom resource definition (CRD) allows to declaratively define how a dynamic set of services should be monitored. Which services are selected to be monitored with the desired configuration is defined using label selections. This allows an organization to introduce conventions around how metrics are exposed, and then following these conventions new services are automatically discovered, without the need to reconfigure the system.
@@ -69,13 +66,13 @@ The `endpoints` section of the `ServiceMonitorSpec`, is used to configure which

 Both `ServiceMonitors` as well as discovered targets may come from any namespace. This is important to allow cross-namespace monitoring use cases, e.g. for meta-monitoring. Using the `ServiceMonitorNamespaceSelector` of the `PrometheusSpec`, one can restrict the namespaces `ServiceMonitor`s are selected from by the respective Prometheus server. Using the `namespaceSelector` of the `ServiceMonitorSpec`, one can restrict the namespaces the `Endpoints` objects are allowed to be discovered from.
 To discover targets in all namespaces the `namespaceSelector` has to be empty:
+
 ```yaml
 spec:
   namespaceSelector:
     any: true
 ```

-
 ## PodMonitor

 The `PodMonitor` custom resource definition (CRD) allows to declaratively define how a dynamic set of pods should be monitored.
@@ -91,26 +88,23 @@ The `PodMetricsEndpoints` section of the `PodMonitorSpec`, is used to configure
 Both `PodMonitors` as well as discovered targets may come from any namespace. This is important to allow cross-namespace monitoring use cases, e.g. for meta-monitoring.
 Using the `namespaceSelector` of the `PodMonitorSpec`, one can restrict the namespaces the `Pods` are allowed to be discovered from.
 To discover targets in all namespaces the `namespaceSelector` has to be empty:
+
 ```yaml
 spec:
   namespaceSelector:
     any: true
 ```

-
 ## Probe

 The `Probe` custom resource definition (CRD) allows to declarative define how groups of ingresses and static targets should be monitored. Besides the target, the `Probe` object requires a `prober` which is the service that monitors the target and provides metrics for Prometheus to scrape. This could be for example achieved using the [blackbox exporter](https://github.com/prometheus/blackbox_exporter/).

-
 ## PrometheusRule

 The `PrometheusRule` custom resource definition (CRD) declaratively defines a desired Prometheus rule to be consumed by one or more Prometheus instances.

 Alerts and recording rules can be saved and applied as YAML files, and dynamically loaded without requiring any restart.

-
 ## AlertmanagerConfig

 The `AlertmanagerConfig` custom resource definition (CRD) declaratively specifies subsections of the Alertmanager configuration, allowing routing of alerts to custom receivers, and setting inhibit rules. The `AlertmanagerConfig` can be defined on a namespace level providing an aggregated config to Alertmanager. An example on how to use it is provided [here](../example/user-guides/alerting/alertmanager-config-example.yaml). Please be aware that this CRD is not stable yet.
-

Documentation/exposing-metrics.md

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ Not all software is natively instrumented with Prometheus metrics, but still rec

 Exporters can generally be divided into two categories:

-* Instance exporters: These expose metrics about a single instance of an application. For example the HTTP requests that a single HTTP server has exporters served. These exporters are deployed as a [side-car](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html) container in the same pod as the actual instance of the respective application. A real life example is the [`dnsmasq` metrics sidecar](https://github.com/kubernetes/dns/blob/master/docs/sidecar/README.md), which converts the proprietary metrics format communicated over the DNS protocol by `dnsmasq` to the Prometheus exposition format and exposes it on an HTTP server.
+* Instance exporters: These expose metrics about a single instance of an application. For example the HTTP requests that a single HTTP server has exporters served. These exporters are deployed as a [side-car](http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html) container in the same pod as the actual instance of the respective application. A real life example is the [`dnsmasq` metrics sidecar](https://github.com/kubernetes/dns/blob/master/docs/sidecar/README.md), which converts the proprietary metrics format communicated over the DNS protocol by `dnsmasq` to the Prometheus exposition format and exposes it on an HTTP server.

 * Cluster-state exporters: These expose metrics about an entire system. For example these could be the number of 3D objects in a game, or metrics about a Kubernetes deployment. These exporters are typically deployed as a normal Kubernetes deployment, but can vary depending on the nature of the particular exporter. A real life example of this is the [`kube-state-metrics`](https://github.com/kubernetes/kube-state-metrics) exporter, which exposes metrics about the cluster state of a Kubernetes cluster.

Documentation/high-availability.md

Lines changed: 10 additions & 10 deletions
@@ -1,15 +1,15 @@
 ---
-title: "High Availability"
-description: "High Availability is a must for the monitoring infrastructure."
-lead: ""
-date: 2021-03-08T08:49:31+00:00
-draft: false
-images: []
-menu:
-  docs:
-    parent: "operator"
 weight: 300
 toc: true
+title: High Availability
+menu:
+  docs:
+    parent: operator
+lead: ""
+images: []
+draft: false
+description: High Availability is a must for the monitoring infrastructure.
+date: "2021-03-08T08:49:31+00:00"
 ---

 High availability is not only important for customer facing software, but if the monitoring infrastructure is not highly available, then there is a risk that operations people are not notified for alerts of the customer facing software. Therefore high availability must be just as thought through for the monitoring stack, as for anything else.
@@ -26,7 +26,7 @@ One of the goals with the Prometheus Operator is that we want to completely auto

 The final step of the high availability scheme between Prometheus and Alertmanager is that Prometheus, when an alert triggers, actually fires alerts against *all* instances of an Alertmanager cluster. Prometheus can discover all Alertmanagers through the Kubernetes API.

-The Alertmanager, starting with the `v0.5.0` release, ships with a high availability mode. It implements a gossip protocol to synchronize instances of an Alertmanager cluster regarding notifications that have been sent out, to prevent duplicate notifications. It is an AP (available and partition tolerant) system. Being an AP system, means that notifications are guaranteed to be sent at least once.
+The Alertmanager, starting with the `v0.5.0` release, ships with a high availability mode. It implements a gossip protocol to synchronize instances of an Alertmanager cluster regarding notifications that have been sent out, to prevent duplicate notifications. It is an AP (available and partition tolerant) system. Being an AP system, means that notifications are guaranteed to be sent at least once.

 The Prometheus Operator ensures that Alertmanager clusters are properly configured to run highly available on Kubernetes, and allows easy configuration of Alertmanagers discovery for Prometheus.

Documentation/network-policies.md

Lines changed: 2 additions & 0 deletions
@@ -16,6 +16,7 @@ This example will close all inbound communication on the namespace monitoring, a
 First, follow the instructions to [add Calico to an existing Kubernetes cluster](http://docs.projectcalico.org/v1.5/getting-started/kubernetes/installation/).

 Next, use the following configuration to deny all the ingress (inbound) traffic.
+
 ```yaml
 apiVersion: networking.k8s.io/v1
 kind: NetworkPolicy
@@ -26,6 +27,7 @@ Next, use the following configuration to deny all the ingress (inbound) traffic.
   podSelector:
     matchLabels:
 ```
+
 Save the config file as default-deny-all.yaml and apply the configuration to the cluster using

 ```sh

Documentation/rbac-crd.md

Lines changed: 11 additions & 11 deletions
@@ -1,15 +1,15 @@
 ---
-title: "RBAC for CRDs"
-description: "Aggregate permissions on the Prometheus Operator CustomResourceDefinitions."
+weight: 420
+toc: true
+title: RBAC for CRDs
+menu:
+  docs:
+    parent: operator
 lead: ""
-date: 2021-03-08T08:49:31+00:00
-draft: false
 images: []
-menu:
-  docs:
-    parent: "operator"
-weight: 420 # nice
-toc: true
+draft: false
+description: Aggregate permissions on the Prometheus Operator CustomResourceDefinitions.
+date: "2021-03-08T08:49:31+00:00"
 ---

 ## Aggregated ClusterRoles
@@ -18,11 +18,11 @@ It can be useful to aggregate permissions on the Prometheus Operator CustomResou

 This can be achieved using ClusterRole aggregation. This lets admins include rules for custom resources, such as those served by CustomResourceDefinitions or Aggregated API servers, on the default roles.

-> Note: ClusterRole aggregation is available starting Kubernetes 1.9.
+> Note: ClusterRole aggregation is available starting Kubernetes 1.9.

 ## Example

-In order to aggregate _read_ (resp. _edit_) permissions for the Prometheus Operator CustomResourceDefinitions to the `view` (resp. `edit` / `admin`) role(s), a cluster admin can create the `ClusterRole`s below.
+In order to aggregate *read* (resp. *edit*) permissions for the Prometheus Operator CustomResourceDefinitions to the `view` (resp. `edit` / `admin`) role(s), a cluster admin can create the `ClusterRole`s below.

 This grants:
 - Users with `view` role permissions to view the Prometheus Operator CRDs within their namespaces,
