Document systemd instantiated services from template unit file for multi-cluster environments #164

Closed
lippserd opened this issue Jan 16, 2025 · 3 comments · Fixed by #172

@lippserd
Member

lippserd commented Jan 16, 2025

A single Icinga for Kubernetes installation should be capable of monitoring multi-cluster environments by utilizing instantiated systemd services, such as icinga-kubernetes@cluster1 and icinga-kubernetes@cluster2. Please investigate how these service files are structured and how they can be utilized. Our documentation should include examples. Packaging will need adjustments to accommodate this, but you can rely on that being handled.

@jhoxhaa, @jrauh01 You can both research this topic.

@lippserd lippserd added this to the 0.3.0 milestone Jan 16, 2025
@jrauh01
Collaborator

jrauh01 commented Jan 20, 2025

I think we could do something like this.

Service file /usr/lib/systemd/system/icinga-kubernetes@.service:

[Unit]
Description=Icinga for Kubernetes Monitoring (%i)
After=network.target

[Service]
Type=simple
Environment="KUBECONFIG=/home/jrauh/.kube/config"
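# %i expands to the instance name, e.g. "cluster1" for icinga-kubernetes@cluster1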
ExecStart=/usr/bin/icinga-kubernetes --config /etc/icinga-kubernetes/config-%i.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target

The config files for each instance would be stored in /etc/icinga-kubernetes. The different instances would be started via:

  • systemctl start icinga-kubernetes@cluster1
  • systemctl start icinga-kubernetes@cluster2
  • ...
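
To also start the instances automatically at boot, they can be enabled; this is plain systemctl usage, nothing specific to this service:

  • systemctl enable --now icinga-kubernetes@cluster1
  • systemctl enable --now icinga-kubernetes@cluster2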

@lippserd
Member Author

How would I manage all instances, e.g. systemctl stop icinga-kubernetes? How can we define a default? I don't want users to have to create an instance first, since we also ship a default config file. I think it would be best to also explain/document our environment variables. Also, our config does not define how to connect to Kubernetes.

@jrauh01
Collaborator

jrauh01 commented Jan 20, 2025

To manage all started instances, the following commands could be used:

  • systemctl status 'icinga-kubernetes@*'
  • systemctl stop 'icinga-kubernetes@*'
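
Quoting the pattern matters: the quotes keep the shell from expanding the glob, so systemctl receives it and matches it against its loaded units. Restarting all instances works the same way:

  • systemctl restart 'icinga-kubernetes@*'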

The default configuration /etc/icinga-kubernetes/config.yml should provide at least the basic settings such as database and logging. The instance-specific settings such as kubeconfig and cluster name should go into .env files like cluster1.env and cluster2.env, also in /etc/icinga-kubernetes. The updated service file could look like this:

[Unit]
Description=Icinga for Kubernetes
After=syslog.target network-online.target mariadb.service postgresql.service

[Service]
Type=simple
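# Instance-specific settings come from the matching .env file; %i is the instance name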
EnvironmentFile=/etc/icinga-kubernetes/%i.env
ExecStart=/usr/sbin/icinga-kubernetes --config /etc/icinga-kubernetes/config.yml --cluster-name $ICINGA_FOR_KUBERNETES_CLUSTER_NAME
User=icinga-kubernetes

[Install]
WantedBy=multi-user.target

An example .env file could look like this:

ICINGA_FOR_KUBERNETES_CLUSTER_NAME=cluster1
ICINGA_FOR_KUBERNETES_PROMETHEUS_URL=http://localhost:9090
KUBECONFIG=/home/jrauh/.kube/config
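
With this layout, systemctl start icinga-kubernetes@cluster1 picks up /etc/icinga-kubernetes/cluster1.env automatically.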

For documenting our environment variables I would suggest an entry in doc/03-Configuration.md. The configuration for connecting to Kubernetes is provided by the kubeconfig, if I'm not mistaken?
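
Regarding the default instance: one option, just a sketch of a common packaging pattern rather than a decision, would be to ship a plain icinga-kubernetes.service next to the template. It would run with config.yml alone, so the default works without creating an instance first:

[Unit]
Description=Icinga for Kubernetes
After=syslog.target network-online.target mariadb.service postgresql.service

[Service]
Type=simple
# No EnvironmentFile and no --cluster-name: this unit is the default instance
ExecStart=/usr/sbin/icinga-kubernetes --config /etc/icinga-kubernetes/config.yml
User=icinga-kubernetes

[Install]
WantedBy=multi-user.target

With that in place, systemctl stop icinga-kubernetes from the earlier question would address exactly this default unit.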
