Role - edpm_container_manage

Usage

Note that right now, only Podman is supported by this role.

This Ansible role performs the following tasks:

  • Collect container configuration data. This data is used as the source of truth for the configuration this role is expected to apply. This means that if a container is already managed by this role, the configuration data will reconfigure the container if needed, no matter its current state (a sample JSON configuration is sketched after the example playbook below).

  • Manage systemd shutdown files. The role takes care of creating the EDPM Container systemd service, required for service ordering when shutting down or starting a node. It also manages the netns-placeholder service.

  • Delete containers that are no longer needed or that need to be reconfigured. It uses a custom filter named needs_delete(), which applies a set of rules to determine whether or not a container needs to be deleted. A container will not be deleted for any of these reasons:

    • The container is not managed by edpm_ansible.

    • The container config_id doesn’t match the one given in input.

    • The container’s name starts with “ceph”. Ceph containers are not managed by this role.

    Once the previous conditions have been checked, these reasons will cause a container to be deleted:

    • The container has no config_data.

    • The container has config_data that doesn’t match the one given in input.

    Note that when a container is removed, the role also disables and removes the systemd services and healthchecks, if present.

  • Create containers in a specific order defined by the start_order container config, which defaults to 0.

    • If the container is an exec, a dedicated playbook for execs is run, using async so that multiple execs can run at the same time.

    • Otherwise, the podman_container module is used, in async, to create the containers. If the container has a restart policy, the systemd service is configured. If the container has a healthcheck script, the systemd healthcheck service is configured.

    Note: the edpm_container_manage_concurrency parameter is set to 1 by default, and setting a value higher than 2 can expose issues with Podman locks.

    Here is an example of a playbook:

- name: Manage step_1 containers using edpm-ansible
  block:
    - name: "Manage containers for step 1 with edpm-ansible"
      include_role:
        name: edpm_container_manage
      vars:
        edpm_container_manage_config: "/var/lib/edpm-config/container-startup-config/step_1"
        edpm_container_manage_config_id: "edpm_step1"
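
Each JSON file consumed via edpm_container_manage_config describes a single container. The sketch below is hypothetical: the file name, image URL, volume path, and most keys are illustrative assumptions; only start_order, the restart policy, and the healthcheck script reflect behaviors described above, so verify the exact schema against the files generated on your deployment.

{
    "image": "quay.io/edpmmastercentos9/centos-binary-haproxy:current",
    "net": "host",
    "privileged": false,
    "restart": "always",
    "start_order": 0,
    "healthcheck": {
        "test": "/openstack/healthcheck"
    },
    "volumes": [
        "/var/lib/config-data/haproxy:/var/lib/kolla/config_files:ro"
    ]
}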

Healthchecks

Previously, the container healthcheck was implemented by a systemd timer which would run podman exec to determine if a given container was healthy. Now, we use the native healthcheck interface in Podman, which is easier to integrate with and consume.

To check if a container (e.g. keystone) is healthy, run the following command:

$ sudo podman healthcheck run keystone

The return code should be 0 and “healthy” should be printed as the output. One can also use the podman inspect keystone output to verify that the healthcheck runs periodically and reports healthy:

"Health": {
    "Status": "healthy",
    "FailingStreak": 0,
    "Log": [
        {
            "Start": "2020-04-14T18:48:57.272180578Z",
            "End": "2020-04-14T18:48:57.806659104Z",
            "ExitCode": 0,
            "Output": ""
        },
        (...)
    ]
}
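
To extract only the status, podman inspect accepts a Go template. Depending on the Podman version, the health data may live under .State.Healthcheck or .State.Health, so adjust the template path if the first form returns nothing:

$ sudo podman inspect keystone --format '{{.State.Healthcheck.Status}}'
healthy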

Debug

The role allows you to perform specific actions on a given container. This can be used to:

  • Run a container with a specific one-off configuration.

  • Output the container commands that are run to manage the containers’ lifecycle.

  • Output the changes that would have been made to containers by Ansible.

Note

To manage a single container, you need to know 2 things:

  • At which step the container is deployed.

  • The name of the generated JSON file for container config.

Here is an example of a playbook that manages the HAProxy container at step 1 and overrides the image setting as a one-off:

- hosts: localhost
  become: true
  tasks:
    - name: Manage step_1 containers using edpm-ansible
      block:
        - name: "Manage HAproxy container at step 1 with edpm-ansible"
          include_role:
            name: edpm_container_manage
          vars:
            edpm_container_manage_config_patterns: 'haproxy.json'
            edpm_container_manage_config: "/var/lib/edpm-config/container-startup-config/step_1"
            edpm_container_manage_config_id: "edpm_step1"
            edpm_container_manage_clean_orphans: false
            edpm_container_manage_config_overrides:
              haproxy:
                image: quay.io/edpmmastercentos9/centos-binary-haproxy:hotfix

If Ansible is run in check mode, no container will be removed or created; however, at the end of the playbook, a list of commands will be displayed to show what would have been run. This is useful for debugging.

$ ansible-playbook haproxy.yaml --check

Adding the diff mode will output the changes that would have been made to containers by Ansible.

$ ansible-playbook haproxy.yaml --check --diff

The edpm_container_manage_clean_orphans parameter is optional and can be set to false to not clean orphaned containers for a config_id. This makes it possible to manage a single container without impacting other running containers with the same config_id.

The edpm_container_manage_config_overrides parameter is optional and can be used to override a specific container attribute, such as the image or the container user. The parameter takes a dictionary where each key is a container name, mapped to the parameters to override. These parameters have to exist, and they are the ones that define the container configuration. Note that the overrides are not written to the JSON file, so if an update or upgrade is executed, the container will be reconfigured with the configuration from the JSON file.
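
For instance, several containers and attributes can be overridden at once. The dictionary shape follows the description above, while the container names, image URLs, and user value below are purely illustrative:

edpm_container_manage_config_overrides:
  haproxy:
    # illustrative override values, not shipped defaults
    image: quay.io/edpmmastercentos9/centos-binary-haproxy:hotfix
    user: root
  keystone:
    image: quay.io/edpmmastercentos9/centos-binary-keystone:hotfix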

osp.edpm.edpm_container_manage role – The main entry point for the edpm_container_manage role.

Entry point main – The main entry point for the edpm_container_manage role.

Parameters

Parameter                                     Type         Comments
edpm_container_manage_clean_orphans           boolean      Choices: false, true (default)
edpm_container_manage_cli                     string       Default: "podman"
edpm_container_manage_concurrency             integer      Default: 1
edpm_container_manage_config                  string       Default: "/var/lib/edpm-config/"
edpm_container_manage_config_id               string       Default: "edpm"
edpm_container_manage_config_overrides        dictionary   Default: {}
edpm_container_manage_config_patterns         string       Default: "*.json"
edpm_container_manage_healthcheck_disabled    boolean      Choices: false (default), true
edpm_container_manage_hide_sensitive_logs     string       Default: "{{ hide_sensitive_logs | default(true) }}"
edpm_container_manage_log_path                string       Default: "/var/log/containers/stdouts"
edpm_container_manage_systemd_teardown        boolean      Choices: false, true (default)
edpm_container_manage_update_config_hash      boolean      Choices: false, true (default)
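
To illustrate how these parameters combine, here is a sketch of a playbook that overrides a few of the defaults listed above; the step_2 path, config_id, and chosen values are examples only:

- hosts: localhost
  become: true
  tasks:
    - name: Manage step_2 containers with healthchecks disabled
      include_role:
        name: edpm_container_manage
      vars:
        # illustrative values; adjust the step path and id to your deployment
        edpm_container_manage_config: "/var/lib/edpm-config/container-startup-config/step_2"
        edpm_container_manage_config_id: "edpm_step2"
        edpm_container_manage_concurrency: 1
        edpm_container_manage_healthcheck_disabled: true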