
Allow port to be configurable #218


Merged · 5 commits · Jun 17, 2018

Conversation

@myii (Contributor) commented Jun 8, 2018

References

  1. Add support for multiple clusters / versions #174
  2. Convert the cmd.run for creating clusters to using the cluster execution module #43 (comment)

Discussion

I've been using this formula for a while now and, as in the comment linked above, I've been able to use db_port to adapt to clusters not using the default port of 5432.

The implementation does not affect existing configurations. The non-standard port would have to be explicitly provided in the pillar.

Changing the port requires a service restart, so the existing service reload state doesn't work -- hence using service.running again. More could be done to check the various situations but I preferred simplicity to efficiency, at least when introducing this functionality.

I envisage that this is a useful first step towards adding support for multiple clusters / versions (i.e. #174).

Concerns

This will probably necessitate modification of the following regex:

Regex description

- `^`        Line start
- `#*\s*`    Find line even if commented out
- `(port)`   'port' -- capture as backreference `\1`
- `\s*=\s*`  Equals sign, whether or not surrounded by whitespace
- `\d{4,5}`  Existing port value, expected at 4/5 digits
- `(.*)`     Remainder (i.e. comment) -- capture as backreference `\2`
- `$`        Line end

Regex:

`^#*\s*(port)\s*=\s*\d{4,5}(.*)$`
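
For illustration only, here's a minimal sketch of how this regex might be wired into a file.replace state (the state ID and pillar key below are hypothetical, not the PR's actual code):

{%- set db_port = salt['pillar.get']('postgres:db_port', '5432') %}

postgresql-conf-port:
  file.replace:
    - name: {{ postgres.conf_dir }}/postgresql.conf
    # \1 keeps the literal 'port' key, \2 keeps any trailing comment on the line
    - pattern: ^#*\s*(port)\s*=\s*\d{4,5}(.*)$
    - repl: \1 = {{ db_port }}\2
    - show_changes: True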
@myii (Contributor, Author) commented Jun 9, 2018

Current deficiencies

One way this method could be disadvantageous is that there is no comment/warning that the port is being managed by Salt. A couple of suggestions for how this could be resolved:

  1. Simply add a comment above the port setting to warn that it is being managed by Salt
    1. Would have to be careful that this doesn't keep getting set with each subsequent run
    2. Probably safer to just append the comment on the same line, instead of using backreference `\2`
  2. Another solution could be to disable the port in the main body and then append the setting using a file.blockreplace, as is being done for listen_addresses

@myii (Contributor, Author) commented Jun 10, 2018

Comment appended to same line

Used the simplest solution to append the warning to the same line:

--- 
+++ 
@@ -60,7 +60,7 @@
                                      # comma-separated list of addresses;
                                      # defaults to 'localhost'; use '*' for all
                                      # (change requires restart)
-port = 5433                          # (change requires restart)
+port = 5433                          # Managed by SaltStack: please do not edit
 max_connections = 100                # (change requires restart)
 #superuser_reserved_connections = 3  # (change requires restart)
 unix_socket_directories = '/var/run/postgresql'      # comma-separated list of directories

* Regex modified:
  - Ensure whitespace before the comment is maintained
  - Consistent with surrounding lines in the file
* Used a YAML block scalar (`>-`) due to the colon-space (`: `) in the comment
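
A simplified sketch of the resulting state (the ID and exact pattern are illustrative; the PR's final regex also preserves the pre-comment whitespace). A plain YAML scalar here would be broken by both the ` # ` and the `: ` in the appended comment, hence the block scalar:

postgresql-conf-port:
  file.replace:
    - name: {{ postgres.conf_dir }}/postgresql.conf
    - pattern: ^#*\s*(port)\s*=\s*\d{4,5}.*$
    # Block scalar needed: the value contains both ': ' and ' # ',
    # which a plain YAML scalar would mis-parse
    - repl: >-
        port = 5433  # Managed by SaltStack: please do not edit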
@vutny (Contributor) left a comment

@myii Thanks for your PR!

Adding support for a non-standard PG port is a tricky task here. You have done that well, but I hope you might consider another way, which should be more solid compared to regex magic :)

  1. Salt uses its own configuration for transparent work with PostgreSQL, which can be exposed via Pillar. This allows custom port usage to be enabled in all other postgres modules right away.

  2. And you can reuse this setting to append to the content of the postgresql-conf state directly in the SLS if it exists. Since the port setting is commented out in the PG distribution, this is rather safe.

I have added some details inline.

pillar.example Outdated
# Non-standard port to use for the cluster
# Only set if port `5432` is not appropriate
port: 5433

Contributor

With what I said above, we could provide a top-level postgres.port Pillar setting. This would be reused directly in the server SLS and automatically enable correct port resolution for all Salt routines related to PostgreSQL.

But it's just better to keep the default, mentioning that it could be changed.

@@ -23,6 +23,7 @@
{{ state }}-{{ name }}:
{{ state }}.{{ ensure|default('present') }}:
{{- format_kwargs(kwarg) }}
- db_port: {{ postgres.port|default(postgres.default_port) }}
Contributor

With the postgres.port setting, this would be unnecessary.

- service: postgresql-port

service.running:
- name: {{ postgres.service }}
Contributor

I think having two states handling a single service is not that good an idea, since this could very easily be overlooked in further development...

But I understand we'd have to deal with the port setting possibly being defined outside of the Managed by SaltStack protected zone. So we can add a file.comment state with a rather simple regex to force commenting out any string matching /port = .+/, only when postgres.port is defined and there would be changes in postgresql-conf:

{%- if salt['pillar.get']('postgres.port') %}

postgresql-alter-port:
  file.comment:
    - name: {{ postgres.conf_dir }}/postgresql.conf
    - regex: ^port = .+
    - prereq:
      - file: postgresql-conf

{%- endif %}

@myii (Contributor, Author) commented Jun 11, 2018

@vutny Thanks for your feedback. I'll get back to you once I get the chance to go over the points you've raised.

@myii (Contributor, Author) commented Jun 12, 2018

@vutny Apologies for the delay; I needed to find some free time to work on this. The changes should be clear enough, since they're mainly an incorporation of your suggestions.

@@ -34,7 +38,7 @@ postgres:
# POSTGRES
# Append the lines under this item to your postgresql.conf file.
# Pay attention to indent exactly with 4 spaces for all lines.
postgresconf: |
postgresconf: |-
Contributor Author

Otherwise an unnecessary blank line is rendered, which is more noticeable if both listen_addresses and port are given.

Contributor

👍

@@ -86,23 +86,43 @@ postgresql-config-dir:
- require:
- cmd: postgresql-cluster-prepared

{%- if postgres.postgresconf %}
{%- set db_port = salt['config.option']('postgres.port') %}
Contributor Author

Instead of the suggested:

{%- if salt['pillar.get']('postgres.port') %}
  • config.option gets the value from the minion configuration first, otherwise the pillar (which then works correctly with the postgres.manage states)
  • Same as the method used in the code for salt.modules.postgres
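
For context, a hedged sketch of where such a value could live -- salt.modules.postgres documents these settings as flat dotted keys, so either location below (values illustrative) would be picked up by config.option:

# In the minion config (e.g. /etc/salt/minion.d/postgres.conf) -- illustrative:
postgres.port: 5433

# ...or the same flat key in pillar -- illustrative:
postgres.port: 5433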

Contributor

Awesome!


{%- endif %}

{%- if postgres.postgresconf or db_port %}

postgresql-conf:
file.blockreplace:
- name: {{ postgres.conf_dir }}/postgresql.conf
- marker_start: "# Managed by SaltStack: listen_addresses: please do not edit"
Contributor Author

Obviously can't change this marker to mention that the managed block is also for the port.

{%- set db_port = salt['config.option']('postgres.port') %}
{%- if db_port %}

postgresql-conf-comment-port:
Contributor Author

> Since the port setting is commented out in the PG distribution, this is rather safe.

Unfortunately, the port is set as part of the installation process, at least on Ubuntu.

So I'm commenting it out and then letting the file.blockreplace below do its job. This works without changes on subsequent runs.
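
A hedged sketch of what that commenting state might look like (its exact body isn't shown in this thread; this follows the earlier file.comment suggestion, ordered before the blockreplace via the require shown further down):

{%- set db_port = salt['config.option']('postgres.port') %}
{%- if db_port %}

postgresql-conf-comment-port:
  # Comment out any active `port = ...` line left behind by the package
  # installation, so the managed block becomes the only live setting
  file.comment:
    - name: {{ postgres.conf_dir }}/postgresql.conf
    - regex: ^port = .+

{%- endif %}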

Contributor

Yeah, it appears that the file.comment state is more intelligent than I thought: it does not comment out other strings matching the same pattern when it encounters the first one already commented. Interesting, I was not aware of that...

@@ -184,7 +204,7 @@ postgresql-running:
service.running:
- name: {{ postgres.service }}
- enable: True
{% if grains.os not in ('MacOS',) %}
{% if grains.os not in ('MacOS',) and not db_port %}
Contributor Author

This is the simplest way to reuse the same state but I'm sure it could be improved. What is the right way to only restart the service when the port has been changed?

Contributor

Technically, many settings inside postgresql.conf require a full restart of the PG cluster; listen_addresses and port are just the obvious ones.

Probably we could use a separate module.run state to force a restart when necessary, but it's better to leave that for another PR addressing this issue.

@vutny (Contributor) left a comment

Great job @myii !

I have only one minor suggestion left. Please take a look at the bottom comment.

- watch_in:
- service: postgresql-running
- service: postgresql-running
Contributor

🙀

@@ -184,7 +204,7 @@ postgresql-running:
service.running:
- name: {{ postgres.service }}
- enable: True
{% if grains.os not in ('MacOS',) %}
{% if grains.os not in ('MacOS',) and not db_port %}
- reload: True
Contributor

I think we could do something like:

     - reload: {{ not postgres.postgresconf and not db_port and not db_port == '5432' }}

@myii (Contributor, Author) commented Jun 12, 2018

@vutny I've opted for reintroducing a separate state for restarting the service. Every other Jinja-based method is either not 100% reliable (won't restart when it needs to) or results in unnecessary restarts. Some examples:

  1. [Unreliable] With the current code, it doesn't restart for other changes made to postgresql.conf
  2. [Unnecessary] With the suggested edit ({{ not postgres.postgresconf and not db_port and not db_port == '5432' }}) -- which incidentally I believe would always return False because of the last two conditions -- it would result in restarts, even if only modifying one of the other files, such as the ACLs in pg_hba.conf
  3. [Unreliable] If the suggested edit was modified to ({{ not postgres.postgresconf and not db_port == '5432' }}), there would still be an issue where it would reload instead of restarting when switching to a non-standard port

I'm sure there are more examples. The real issue here is that the Jinja isn't 100% deterministic, so it will always cause one issue or the other. The only way of being definitive is to use one state for watching when reloads can take place and another to watch for when restarts should take place. Keeping the reload/restart states next to each other will minimise development issues. Using requisites (i.e. watch) instead of Jinja is better practice anyway.

Unless there is some way of:

If the "result" of the watched state is True and the "changes" key contains a populated dictionary (changes occurred in the watched state)...

  • Accessing the running dict and checking for the changes there

  • Or if there was a way of knowing which of the watch entries was triggering the service.running state

service.running:
- name: {{ postgres.service }}
{%- endif %}

Contributor

@myii I like the idea of using onchanges requisite much better. We could do something like this.

{%- if postgres.postgresconf or db_port %}

postgresql-service-restart:
  module.run:
    - name: service.restart
    - m_name: {{ postgres.service }}
    - onchanges:
      - file: postgresql-conf

{%- endif %}

Contributor Author

@vutny Of course that can be done. I have a couple of questions:

  1. I'm curious: while I've used module.run for my own formulas, I've been under the impression that it should only be used as a last resort, when there isn't an available state -- for my understanding, would you mind explaining the advantage of using this instead?
  2. To save having to keep the Jinja ({%- if postgres.postgresconf or db_port %}) in sync with the one being used above, how about moving this into that block above, i.e.:
{%- if postgres.postgresconf or db_port %}

postgresql-conf:
  file.blockreplace:
    - name: {{ postgres.conf_dir }}/postgresql.conf
    - marker_start: "# Managed by SaltStack: listen_addresses: please do not edit"
    - marker_end: "# Managed by SaltStack: end of salt managed zone --"
    - content: |
        {%- if postgres.postgresconf %}
        {{ postgres.postgresconf|indent(8) }}
        {%- endif %}
        {%- if db_port %}
        port = {{ db_port }}
        {%- endif %}
    - show_changes: True
    - append_if_not_found: True
    {#- Detect empty values (none, '') in the config_backup #}
    - backup: {{ postgres.config_backup|default(false, true) }}
    - require:
      - file: postgresql-config-dir
      {%- if db_port %}
      - file: postgresql-conf-comment-port
      {%- endif %}

  module.run:
    - name: service.restart
    - m_name: {{ postgres.service }}
    - onchanges:
      - file: postgresql-conf

{%- endif %}

Further advantages: fewer lines and keeping logically related states together.

Contributor

@myii and @vutny what about the apache-formula's way,

instead of module.run:

apache-reload:
  module.wait:
    - name: service.reload
    - m_name: {{ apache.service }}

and then have the following in the file state:

    - watch_in:
      - module: apache-reload


Contributor Author

@aboe76 Unfortunately, can't use listen/listen_in because the service needs to be restarted before the postgres.manage states run.

Contributor Author

@aboe76 @vutny module.wait + watch/watch_in looks like an improvement, thanks for that @aboe76. However, the other issues raised above still remain. At the very least, where should this be positioned? In the same {%- if postgres.postgresconf or db_port %} block as shown above or at the bottom next to service.running?

Contributor

@myii I think it is OK to group the postgresql-conf and future postgres-restart states together, just better to name them separately, because that's good practice.

In the case of an initial installation, when the PG cluster is not running yet, the restart state would simply bring it up. Later, the postgresql-running state will just do a reload when necessary, which is fast and safe. And it would bring the service back in case of any earlier failures. But this will prevent unnecessary restarts when there are no changes to the postgresql.conf file.

With that said, your approach is good. Please update the PR and I'm sure we'll merge it in.
Thank you for your contribution!

@myii (Contributor, Author) commented Jun 16, 2018

@vutny @aboe76 So I've applied those changes using module.wait + watch_in. Thanks for all of your feedback.
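
For reference, a hedged sketch of the module.wait + watch_in pairing (state IDs and the trimmed blockreplace arguments are illustrative; the merged code may differ):

postgresql-conf-restart:
  module.wait:
    - name: service.restart
    - m_name: {{ postgres.service }}

postgresql-conf:
  file.blockreplace:
    - name: {{ postgres.conf_dir }}/postgresql.conf
    - marker_start: "# Managed by SaltStack: listen_addresses: please do not edit"
    - marker_end: "# Managed by SaltStack: end of salt managed zone --"
    - content: |
        port = {{ db_port }}
    - append_if_not_found: True
    # Restart (not just reload) whenever the managed block changes
    - watch_in:
      - module: postgresql-conf-restart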


Using the work in this PR, I've been able to adapt the formula to work with multiple clusters/versions. While this has far-reaching impact, the implementation is relatively simple, primarily wrapping the existing code in for loops and appending the port to state/requisite identifiers. It is designed to have no effect on existing configurations, strictly opt-in.

At this stage, the implementation is rudimentary, i.e. "works for me". Is there any appetite for including this functionality? If there is, I'll start a new PR where it can be discussed further.

@aboe76 (Contributor) commented Jun 16, 2018

@vutny would you like to take a second look?

@vutny (Contributor) left a comment

@aboe76 This is good to go.
Thanks for your dedication @myii !

@aboe76 merged commit 844ae97 into saltstack-formulas:master Jun 17, 2018
@aboe76 (Contributor) commented Jun 17, 2018

Done
