diff --git a/08-ansible-01-base/README.md b/08-ansible-01-base/README.md index 1519ea5e5..4e6709219 100644 --- a/08-ansible-01-base/README.md +++ b/08-ansible-01-base/README.md @@ -2,22 +2,72 @@ ## Подготовка к выполнению 1. Установите ansible версии 2.10 или выше. +![Иллюстрация к ДЗ](https://i.imgur.com/BUeH8km.png) 2. Создайте свой собственный публичный репозиторий на github с произвольным именем. +https://github.com/NamorNinayzuk/mnt-homeworks/edit/MNT-video/08-ansible-01-base 3. Скачайте [playbook](./playbook/) из репозитория с домашним заданием и перенесите его в свой репозиторий. ## Основная часть 1. Попробуйте запустить playbook на окружении из `test.yml`, зафиксируйте какое значение имеет факт `some_fact` для указанного хоста при выполнении playbook'a. +![Иллюстрация к ДЗ](https://i.imgur.com/TMn7fmE.png) 2. Найдите файл с переменными (group_vars) в котором задаётся найденное в первом пункте значение и поменяйте его на 'all default fact'. +![Иллюстрация к ДЗ](https://i.imgur.com//prqbQsl.png) +![Иллюстрация к ДЗ](https://i.imgur.com//0ofOvB1.png) 3. Воспользуйтесь подготовленным (используется `docker`) или создайте собственное окружение для проведения дальнейших испытаний. +![Иллюстрация к ДЗ](https://i.imgur.com//0rGqtc4.png) 4. Проведите запуск playbook на окружении из `prod.yml`. Зафиксируйте полученные значения `some_fact` для каждого из `managed host`. 5. Добавьте факты в `group_vars` каждой из групп хостов так, чтобы для `some_fact` получились следующие значения: для `deb` - 'deb default fact', для `el` - 'el default fact'. +![Иллюстрация к ДЗ](https://i.imgur.com//cBWAHiz.png) 6. Повторите запуск playbook на окружении `prod.yml`. Убедитесь, что выдаются корректные значения для всех хостов. +![Иллюстрация к ДЗ](https://i.imgur.com//3IMEj5y.png) 7. При помощи `ansible-vault` зашифруйте факты в `group_vars/deb` и `group_vars/el` с паролем `netology`. -8. Запустите playbook на окружении `prod.yml`. При запуске `ansible` должен запросить у вас пароль. 
Убедитесь в работоспособности. -9. Посмотрите при помощи `ansible-doc` список плагинов для подключения. Выберите подходящий для работы на `control node`. -10. В `prod.yml` добавьте новую группу хостов с именем `local`, в ней разместите localhost с необходимым типом подключения. -11. Запустите playbook на окружении `prod.yml`. При запуске `ansible` должен запросить у вас пароль. Убедитесь что факты `some_fact` для каждого из хостов определены из верных `group_vars`. -12. Заполните `README.md` ответами на вопросы. Сделайте `git push` в ветку `master`. В ответе отправьте ссылку на ваш открытый репозиторий с изменённым `playbook` и заполненным `README.md`. +![Иллюстрация к ДЗ](https://i.imgur.com//lwwMQFe.png) +8. Запустите playbook на окружении `prod.yml`. При запуске `ansible` должен запросить у вас пароль. Убедитесь в работоспособности. +![Иллюстрация к ДЗ](https://i.imgur.com//fKvZRf8.png) +9. Посмотрите при помощи `ansible-doc` список плагинов для подключения. Выберите подходящий для работы на `control node`. +`ansible-doc -t connection -l` +![Иллюстрация к ДЗ](https://i.imgur.com//v8WKvqF.png) +10. В `prod.yml` добавьте новую группу хостов с именем `local`, в ней разместите localhost с необходимым типом подключения. +![Иллюстрация к ДЗ](https://i.imgur.com//pTBqxEg.png) +11. Запустите playbook на окружении `prod.yml`. При запуске `ansible` должен запросить у вас пароль. Убедитесь, что факты `some_fact` для каждого из хостов определены из верных `group_vars`. +![Иллюстрация к ДЗ](https://i.imgur.com//Qm8pQ6j.png) +12. Заполните `README.md` ответами на вопросы. Сделайте `git push` в ветку `master`. В ответе отправьте ссылку на ваш открытый репозиторий с изменённым `playbook` и заполненным `README.md`. + + + +## Самоконтроль выполнения задания +1. Где расположен файл с some_fact из второго пункта задания? +group_vars/all/examp.yml + +2. Какая команда нужна для запуска вашего playbook на окружении test.yml? +ansible-playbook -i inventory/test.yml site.yml + +3.
Какой командой можно зашифровать файл? +ansible-vault encrypt group_vars/el/examp.yml + +4. Какой командой можно расшифровать файл? +ansible-vault decrypt group_vars/el/examp.yml + +5. Можно ли посмотреть содержимое зашифрованного файла без команды расшифровки файла? Если можно, то как? +ansible-vault view group_vars/el/examp.yml + +6. Как выглядит команда запуска playbook, если переменные зашифрованы? +ansible-playbook -i inventory/prod.yml site.yml --ask-vault-pass + +7. Как называется модуль подключения к host на windows? +winrm + +8. Приведите полный текст команды для поиска информации в документации ansible для модуля подключений ssh +ansible-doc -t connection ssh + +9. Какой параметр из модуля подключения ssh необходим для того, чтобы определить пользователя, под которым необходимо совершать подключение? +remote_user + + + + + + ## Необязательная часть diff --git a/08-ansible-01-base/docker-compose.yml b/08-ansible-01-base/docker-compose.yml new file mode 100644 index 000000000..e77a26bd0 --- /dev/null +++ b/08-ansible-01-base/docker-compose.yml @@ -0,0 +1,15 @@ +version: '3' +services: + centos7: + image: pycontribs/centos:7 + container_name: centos7 + restart: unless-stopped + entrypoint: "sleep infinity" + + ubuntu: + image: pycontribs/ubuntu + container_name: ubuntu + restart: unless-stopped + entrypoint: "sleep infinity" + + diff --git a/08-ansible-01-base/playbook/README.md b/08-ansible-01-base/playbook/README.md index e2a380aef..89948721b 100644 --- a/08-ansible-01-base/playbook/README.md +++ b/08-ansible-01-base/playbook/README.md @@ -1,11 +1,20 @@ # Самоконтроль выполненения задания 1. Где расположен файл с `some_fact` из второго пункта задания? +# ./group_vars/all/examp.yml 2. Какая команда нужна для запуска вашего `playbook` на окружении `test.yml`? +# ansible-playbook -i inventory/test.yml site.yml 3. Какой командой можно зашифровать файл? +# ansible-vault encrypt 4. Какой командой можно расшифровать файл? +# ansible-vault decrypt 5. 
Можно ли посмотреть содержимое зашифрованного файла без команды расшифровки файла? Если можно, то как? +# ansible-vault view 6. Как выглядит команда запуска `playbook`, если переменные зашифрованы? +# ansible-playbook --ask-vault-pass playbook.yml 7. Как называется модуль подключения к host на windows? +# winrm 8. Приведите полный текст команды для поиска информации в документации ansible для модуля подключений ssh +# ansible-doc -t connection ssh 9. Какой параметр из модуля подключения `ssh` необходим для того, чтобы определить пользователя, под которым необходимо совершать подключение? +# remote_user diff --git a/08-ansible-01-base/playbook/group_vars/all/examp.yml b/08-ansible-01-base/playbook/group_vars/all/examp.yml index aae018217..b3d6bef82 100644 --- a/08-ansible-01-base/playbook/group_vars/all/examp.yml +++ b/08-ansible-01-base/playbook/group_vars/all/examp.yml @@ -1,2 +1,5 @@ --- - some_fact: 12 \ No newline at end of file +# some_fact: 12 + some_fact: all default fact + + diff --git a/08-ansible-01-base/playbook/group_vars/deb/examp.yml b/08-ansible-01-base/playbook/group_vars/deb/examp.yml index 11bc6fe03..f5e734659 100644 --- a/08-ansible-01-base/playbook/group_vars/deb/examp.yml +++ b/08-ansible-01-base/playbook/group_vars/deb/examp.yml @@ -1,2 +1,7 @@ ---- - some_fact: "deb" \ No newline at end of file +$ANSIBLE_VAULT;1.1;AES256 +30656231313463313961333531363332636262613333636466613536346539636132326332336536 +3635323533663232313465653866613361666361643234350a646264313465343565373031623361 +35343935343066646535643338306566356636653336616333633563373366386637633230663338 +3938643366396235360a386536663264373961333739633038366466393762303836376331353065 +38363731386334343935633633376630376239393930383337626531393631386232396436383563 +6334353935613365666137303839623237353337656538383637 diff --git a/08-ansible-01-base/playbook/group_vars/el/examp.yml b/08-ansible-01-base/playbook/group_vars/el/examp.yml index 4c066eb64..6015c84bf 100644 --- 
a/08-ansible-01-base/playbook/group_vars/el/examp.yml +++ b/08-ansible-01-base/playbook/group_vars/el/examp.yml @@ -1,2 +1,8 @@ ---- - some_fact: "el" \ No newline at end of file +$ANSIBLE_VAULT;1.1;AES256 +36633766613265373837313864333732343731646332336630343264633331623761656137656162 +6434643665663536363536643263303537363866326362390a653335343235306539323266343135 +30303838393461333362373862323637353066376230303230316363326166376236613232666632 +6663636365366532350a623739613561663732363235356530383239613366613731643862373662 +63636464643964303763353932666330383863633230383861666666653062333133313232356362 +6164313833316230363033623734393262303064363535653366 + diff --git a/08-ansible-01-base/playbook/inventory/prod.yml b/08-ansible-01-base/playbook/inventory/prod.yml index 56470ed41..d7c2c347d 100644 --- a/08-ansible-01-base/playbook/inventory/prod.yml +++ b/08-ansible-01-base/playbook/inventory/prod.yml @@ -6,4 +6,9 @@ deb: hosts: ubuntu: - ansible_connection: docker \ No newline at end of file + ansible_connection: docker + local: + hosts: + localhost: + ansible_connection: local + diff --git a/08-ansible-01-base/playbook/inventory/test.yml b/08-ansible-01-base/playbook/inventory/test.yml index 93c1550cf..2a9c9335a 100644 --- a/08-ansible-01-base/playbook/inventory/test.yml +++ b/08-ansible-01-base/playbook/inventory/test.yml @@ -2,4 +2,5 @@ inside: hosts: localhost: - ansible_connection: local \ No newline at end of file + ansible_connection: local + diff --git a/08-ansible-02-playbook/README.md b/08-ansible-02-playbook/README.md index 7dc35db1b..480988bc8 100644 --- a/08-ansible-02-playbook/README.md +++ b/08-ansible-02-playbook/README.md @@ -9,21 +9,318 @@ ## Основная часть -1. Приготовьте свой собственный inventory файл `prod.yml`. +1. 
Приготовьте свой собственный inventory файл [prod.yml](https://github.com/NamorNinayzuk/mnt-homeworks/blob/MNT-video/08-ansible-02-playbook/playbook/inventory/prod.yml "жмакай") + +`prod.yml` + ```yml + --- +clickhouse: + hosts: + clickhouse-01: + ansible_host: "172.17.0.110" + ``` + + 2. Допишите playbook: нужно сделать ещё один play, который устанавливает и настраивает [vector](https://vector.dev). + +![Play_install_deb_pack_clickhouse](https://imgur.com/p8L2wjV.png) + +![Play_install_deb_pack_clickhouse](https://i.imgur.com/LyPSTf4.png) + +![Play_deinstall_deb_pack_vector](https://i.imgur.com/mbHQkO7.png) + +![Play_deinstall_deb_pack_clickhouse](https://i.imgur.com/QdtNPxo.png) + 3. При создании tasks рекомендую использовать модули: `get_url`, `template`, `unarchive`, `file`. + - Mods + +`ansible.builtin.get_url` + +`ansible.builtin.apt` + +`ansible.builtin.meta` + +`ansible.builtin.pause` + +`ansible.builtin.command` + +`ansible.builtin.file` + +`ansible.builtin.unarchive` + +`ansible.builtin.copy` + +`ansible.builtin.replace` + +`ansible.builtin.user` + +`ansible.builtin.service` + +`ansible.builtin.systemd` + 4. Tasks должны: скачать нужной версии дистрибутив, выполнить распаковку в выбранную директорию, установить vector. +All in `site.yml` + + ```yml + --- +- name: Install Clickhouse & Vector + hosts: clickhouse + gather_facts: false + + handlers: + - name: Start clickhouse service + become: true + ansible.builtin.service: + name: clickhouse-server + state: restarted + + - name: Start Vector service + become: true + ansible.builtin.systemd: + daemon_reload: true + enabled: false + name: vector.service + state: started + + tasks: + - block: + - block: + - name: Clickhouse. 
Get clickhouse distrib + ansible.builtin.get_url: + url: "https://packages.clickhouse.com/deb/pool/stable/{{ item }}_{{ clickhouse_version }}_all.deb" + dest: "./{{ item }}_{{ clickhouse_version }}_all.deb" + mode: 0644 + with_items: "{{ clickhouse_packages }}" + rescue: + - name: Clickhouse. Get clickhouse distrib + ansible.builtin.get_url: + url: "https://packages.clickhouse.com/deb/pool/stable/clickhouse-common-static_{{ clickhouse_version }}_amd64.deb" + dest: "./clickhouse-common-static_{{ clickhouse_version }}_amd64.deb" + mode: 0644 + with_items: "{{ clickhouse_packages }}" + + - name: Clickhouse. Install package clickhouse-common-static + become: true + ansible.builtin.apt: + deb: ./clickhouse-common-static_{{ clickhouse_version }}_amd64.deb + notify: Start clickhouse service + + - name: Clickhouse. Install package clickhouse-client + become: true + ansible.builtin.apt: + deb: ./clickhouse-client_{{ clickhouse_version }}_all.deb + notify: Start clickhouse service + + - name: Clickhouse. Install clickhouse package clickhouse-server + become: true + ansible.builtin.apt: + deb: ./clickhouse-server_{{ clickhouse_version }}_all.deb + notify: Start clickhouse service + + - name: Clickhouse. Flush handlers + ansible.builtin.meta: flush_handlers + + - name: Clickhouse. Waiting while clickhouse-server is available... + ansible.builtin.pause: + seconds: 10 + echo: false + + - name: Clickhouse. Create database + ansible.builtin.command: "clickhouse-client -q 'create database logs;'" + register: create_db + failed_when: create_db.rc != 0 and create_db.rc !=82 + changed_when: create_db.rc == 0 + tags: clickhouse + + - block: + - name: Vector. Create work directory + ansible.builtin.file: + path: "{{ vector_workdir }}" + state: directory + mode: 0755 + + - name: Vector. 
Get Vector distributive + ansible.builtin.get_url: + url: "https://packages.timber.io/vector/{{ vector_version }}/vector-{{ vector_version }}-{{ vector_os_arh }}-unknown-linux-gnu.tar.gz" + dest: "{{ vector_workdir }}/vector-{{ vector_version }}-{{ vector_os_arh }}-unknown-linux-gnu.tar.gz" + mode: 0644 + + - name: Vector. Unzip archive + ansible.builtin.unarchive: + remote_src: true + src: "{{ vector_workdir }}/vector-{{ vector_version }}-{{ vector_os_arh }}-unknown-linux-gnu.tar.gz" + dest: "{{ vector_workdir }}" + + - name: Vector. Install vector binary file + become: true + ansible.builtin.copy: + remote_src: true + src: "{{ vector_workdir }}/vector-{{ vector_os_arh }}-unknown-linux-gnu/bin/vector" + dest: "/usr/bin/" + mode: 0755 + owner: root + group: root + + - name: Vector. Check Vector installation + ansible.builtin.command: "vector --version" + register: var_vector + failed_when: var_vector.rc != 0 + changed_when: var_vector.rc == 0 + + - name: Vector. Create Vector config vector.toml + become: true + ansible.builtin.copy: + remote_src: true + src: "{{ vector_workdir }}/vector-{{ vector_os_arh }}-unknown-linux-gnu/config/vector.toml" + dest: "/etc/vector/" + mode: 0644 + owner: root + group: root + + - name: Vector. Create vector.service daemon + become: true + ansible.builtin.copy: + remote_src: true + src: "{{ vector_workdir }}/vector-{{ vector_os_arh }}-unknown-linux-gnu/etc/systemd/vector.service" + dest: "/lib/systemd/system/" + mode: 0644 + owner: root + group: root + notify: Start Vector service + + - name: Vector. Modify vector.service file + become: true + ansible.builtin.replace: + backup: true + path: "/lib/systemd/system/vector.service" + regexp: "^ExecStart=/usr/bin/vector$" + replace: "ExecStart=/usr/bin/vector --config /etc/vector/vector.toml" + notify: Start Vector service + + - name: Vector. Create user vector + become: true + ansible.builtin.user: + create_home: false + name: "{{ vector_os_user }}" + + - name: Vector. 
Create data_dir + become: true + ansible.builtin.file: + path: "/var/lib/vector" + state: directory + mode: 0755 + owner: "{{ vector_os_user }}" + + + - name: Vector. Remove work directory + ansible.builtin.file: + path: "{{ vector_workdir }}" + state: absent + + tags: vector + + ``` + 5. Запустите `ansible-lint site.yml` и исправьте ошибки, если они есть. + +![ansible-lint](https://i.imgur.com/EQQbee9.png) + 6. Попробуйте запустить playbook на этом окружении с флагом `--check`. + +![check-kek-chebureg](https://i.imgur.com/vtgI53W.png) + 7. Запустите playbook на `prod.yml` окружении с флагом `--diff`. Убедитесь, что изменения на системе произведены. + +![1](https://i.imgur.com/XwDE6A4.png) + +![2](https://i.imgur.com/hAFMIWp.png) + +![3](https://i.imgur.com/nwWK9W4.png) + +![4](https://i.imgur.com/NAGswJQ.png) + 8. Повторно запустите playbook с флагом `--diff` и убедитесь, что playbook идемпотентен. + +![1](https://i.imgur.com/ZgY4OQF.png) + +![2](https://i.imgur.com/4ZA5LsP.png) + +![3](https://i.imgur.com/C4kBZTX.png) + 9. Подготовьте README.md файл по своему playbook. В нём должно быть описано: что делает playbook, какие у него есть параметры и теги. + +[playbook/site.yml](https://github.com/NamorNinayzuk/mnt-homeworks/blob/MNT-video/08-ansible-02-playbook/playbook/site.yml ) содержит 2 блока задач: + +Первый блок объединяет последовательность задач по инсталяции Clickhouse. Блоку соответствует тэг `clickhouse`. В блоке используются параметры: + +`clickhouse_version: "22.3.3.44"` - версия Clickhouse + +`clickhouse_packages: ["clickhouse-client", "clickhouse-server", "clickhouse-common-static"]` - список пакетов для установки + +### Task's +из них: + +`TASK [Clickhouse. Get clickhouse distrib]` - скачивает deb-пакеты с дистрибутивами с помощью модуля ansible.builtin.get_url + +`TASK [Clickhouse. Install package clickhouse-common-static]` - устанавливает deb-пакет с помощью модуля ansible.builtin.apt + +`TASK [Clickhouse. 
Install package clickhouse-client]` - устанавливает deb-пакет с помощью модуля ansible.builtin.apt + +`TASK [Clickhouse. Install clickhouse package clickhouse-server]` - устанавливает deb-пакеты с помощью модуля ansible.builtin.apt + +`TASK [Clickhouse. Flush handlers]` - инициирует внеочередной запуск хандлера Start clickhouse service + +`RUNNING HANDLER [Start clickhouse service]` - для старта сервера clickhouse в хандлере используется модуль ansible.builtin.service + +`TASK [Clickhouse. Waiting while clickhouse-server is available...]` - устанавливает паузу в 10 секунд с помощью модуля ansible.builtin.pause, чтобы сервер Clickhouse успел запуститься. Иначе следующая задача по созданию БД может завершиться ошибкой, т.к. сервер еще не успел подняться. + +`TASK [Clickhouse. Create database]` - создает инстанс базы данных Clickhouse + + +Второй блок объединяет последовательность задач по инсталяции Vector. Блоку соответствует тэг `vector`. В блоке используются параметры: + +`vector_version: "0.21.1"` - версия Vector + +`vector_os_arh: "x86_64"` - архитектура ОС + +`vector_workdir: "/home/vector"` - рабочий каталог, в котором будут сохранены скачанные deb-пакеты + +`vector_os_user: "vector"` - имя пользователя-владельца Vector в ОС + +`vector_os_group: "vector"` - имя группы пользователя-владельца Vector в ОС + +### Task: + +`TASK [Vector. Create work directory]` - создает рабочий каталог, в котором будут сохранены скачанные deb-пакеты, с помощью модуля ansible.builtin.file + +`TASK [Vector. Get Vector distributive]` - скачивает архив с дистрибутивом с помощью модуля ansible.builtin.get_url + +`TASK [Vector. Unzip archive]` - распаковывает скачанный архив с помощью модуля ansible.builtin.unarchive + +`TASK [Vector. Install vector binary file]` - копирует исполняемый файл Vector в /usr/bin с помощью модуля ansible.builtin.copy + +`TASK [Vector. 
Check Vector installation]` - проверяет, что бинарный файл Vector работает корректно, с помощью модуля ansible.builtin.command + +`TASK [Vector. Create Vector config vector.toml]` - создает файл /etc/vector/vector.toml с конфигом Vector с помощью модуля ansible.builtin.copy + +`TASK [Vector. Create vector.service daemon]` - создает файл юнита systemd /lib/systemd/system/vector.service с помощью модуля ansible.builtin.copy + +`TASK [Vector. Modify vector.service file]` - редактирует файл юнита systemd /lib/systemd/system/vector.service с помощью модуля ansible.builtin.replace + +`TASK [Vector. Create user vector]` - создает пользователя ОС с помощью модуля ansible.builtin.user + +`TASK [Vector. Create data_dir]` - создает каталог для данных Vector с помощью модуля ansible.builtin.file + +`TASK [Vector. Remove work directory]` - удаляет рабочий каталог с помощью модуля ansible.builtin.file + +`RUNNING HANDLER [Start Vector service]` - инициируется запуск хандлера Start Vector service, обновляющего конфигурацию systemd и стартующего сервис vector.service с помощью модуля ansible.builtin.systemd + + 10. Готовый playbook выложите в свой репозиторий, поставьте тег `08-ansible-02-playbook` на фиксирующий коммит, в ответ предоставьте ссылку на него. +[playbook](https://github.com/NamorNinayzuk/mnt-homeworks/tree/MNT-video/08-ansible-02-playbook/playbook "жмакай") --- ### Как оформить ДЗ? Выполненное домашнее задание пришлите ссылкой на .md-файл в вашем репозитории.
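Замечание к задаче ожидания: жёсткая пауза в `TASK [Clickhouse. Waiting while clickhouse-server is available...]` может оказаться как избыточной, так и недостаточной. Как набросок альтернативы (не входит в playbook выше): вместо `ansible.builtin.pause` можно дождаться открытия порта сервера модулем `ansible.builtin.wait_for`; порт 9000 — порт нативного протокола clickhouse-server по умолчанию, значения `delay`/`timeout` здесь условные.

```yaml
# Набросок: ожидание готовности clickhouse-server вместо фиксированной паузы.
- name: Clickhouse. Wait for clickhouse-server
  ansible.builtin.wait_for:
    host: 127.0.0.1
    port: 9000        # порт нативного протокола по умолчанию
    delay: 2          # пауза перед первой проверкой
    timeout: 60       # задача упадёт с ошибкой, если сервер не поднялся за минуту
```

Такая задача завершится явной ошибкой, если сервер не поднялся за отведённое время, а не продолжит выполнение «вслепую» после паузы.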
- ---- diff --git a/08-ansible-02-playbook/playbook/.gitignore b/08-ansible-02-playbook/playbook/.gitignore index 5ed0cb64c..6e8363b29 100644 --- a/08-ansible-02-playbook/playbook/.gitignore +++ b/08-ansible-02-playbook/playbook/.gitignore @@ -1 +1 @@ -files/* \ No newline at end of file +files/* diff --git a/08-ansible-02-playbook/playbook/group_vars/clickhouse/vars.yml b/08-ansible-02-playbook/playbook/group_vars/clickhouse/vars.yml index da987492b..4842537dc 100644 --- a/08-ansible-02-playbook/playbook/group_vars/clickhouse/vars.yml +++ b/08-ansible-02-playbook/playbook/group_vars/clickhouse/vars.yml @@ -4,3 +4,9 @@ clickhouse_packages: - clickhouse-client - clickhouse-server - clickhouse-common-static + +vector_version: "0.21.1" +vector_os_arh: "x86_64" +vector_workdir: "/home/vector" +vector_os_user: "vector" +vector_os_group: "vector" diff --git a/08-ansible-02-playbook/playbook/inventory/prod.yml b/08-ansible-02-playbook/playbook/inventory/prod.yml index 83ddf34cb..099e07b62 100644 --- a/08-ansible-02-playbook/playbook/inventory/prod.yml +++ b/08-ansible-02-playbook/playbook/inventory/prod.yml @@ -2,4 +2,4 @@ clickhouse: hosts: clickhouse-01: - ansible_host: + ansible_host: "172.17.0.110" diff --git a/08-ansible-02-playbook/playbook/site.yml b/08-ansible-02-playbook/playbook/site.yml index ab99c50b6..a763bbe45 100755 --- a/08-ansible-02-playbook/playbook/site.yml +++ b/08-ansible-02-playbook/playbook/site.yml @@ -1,36 +1,156 @@ ---- -- name: Install Clickhouse +--- +- name: Install Clickhouse & Vector hosts: clickhouse + gather_facts: false + handlers: - name: Start clickhouse service become: true ansible.builtin.service: name: clickhouse-server state: restarted + + - name: Start Vector service + become: true + ansible.builtin.systemd: + daemon_reload: true + enabled: false + name: vector.service + state: started + tasks: - block: - - name: Get clickhouse distrib - ansible.builtin.get_url: - url: "https://packages.clickhouse.com/rpm/stable/{{ item }}-{{ 
clickhouse_version }}.noarch.rpm" - dest: "./{{ item }}-{{ clickhouse_version }}.rpm" - with_items: "{{ clickhouse_packages }}" - rescue: - - name: Get clickhouse distrib + - block: + - name: Clickhouse. Get clickhouse distrib + ansible.builtin.get_url: + url: "https://packages.clickhouse.com/deb/pool/stable/{{ item }}_{{ clickhouse_version }}_all.deb" + dest: "./{{ item }}_{{ clickhouse_version }}_all.deb" + mode: 0644 + with_items: "{{ clickhouse_packages }}" + rescue: + - name: Clickhouse. Get clickhouse distrib + ansible.builtin.get_url: + url: "https://packages.clickhouse.com/deb/pool/stable/clickhouse-common-static_{{ clickhouse_version }}_amd64.deb" + dest: "./clickhouse-common-static_{{ clickhouse_version }}_amd64.deb" + mode: 0644 + with_items: "{{ clickhouse_packages }}" + + - name: Clickhouse. Install package clickhouse-common-static + become: true + ansible.builtin.apt: + deb: ./clickhouse-common-static_{{ clickhouse_version }}_amd64.deb + notify: Start clickhouse service + + - name: Clickhouse. Install package clickhouse-client + become: true + ansible.builtin.apt: + deb: ./clickhouse-client_{{ clickhouse_version }}_all.deb + notify: Start clickhouse service + + - name: Clickhouse. Install clickhouse package clickhouse-server + become: true + ansible.builtin.apt: + deb: ./clickhouse-server_{{ clickhouse_version }}_all.deb + notify: Start clickhouse service + + - name: Clickhouse. Flush handlers + ansible.builtin.meta: flush_handlers + + - name: Clickhouse. Waiting while clickhouse-server is available... + ansible.builtin.pause: + seconds: 10 + echo: false + + - name: Clickhouse. Create database + ansible.builtin.command: "clickhouse-client -q 'create database logs;'" + register: create_db + failed_when: create_db.rc != 0 and create_db.rc !=82 + changed_when: create_db.rc == 0 + tags: clickhouse + + - block: + - name: Vector. Create work directory + ansible.builtin.file: + path: "{{ vector_workdir }}" + state: directory + mode: 0755 + + - name: Vector. 
Get Vector distributive ansible.builtin.get_url: - url: "https://packages.clickhouse.com/rpm/stable/clickhouse-common-static-{{ clickhouse_version }}.x86_64.rpm" - dest: "./clickhouse-common-static-{{ clickhouse_version }}.rpm" - - name: Install clickhouse packages - become: true - ansible.builtin.yum: - name: - - clickhouse-common-static-{{ clickhouse_version }}.rpm - - clickhouse-client-{{ clickhouse_version }}.rpm - - clickhouse-server-{{ clickhouse_version }}.rpm - notify: Start clickhouse service - - name: Flush handlers - meta: flush_handlers - - name: Create database - ansible.builtin.command: "clickhouse-client -q 'create database logs;'" - register: create_db - failed_when: create_db.rc != 0 and create_db.rc !=82 - changed_when: create_db.rc == 0 + url: "https://packages.timber.io/vector/{{ vector_version }}/vector-{{ vector_version }}-{{ vector_os_arh }}-unknown-linux-gnu.tar.gz" + dest: "{{ vector_workdir }}/vector-{{ vector_version }}-{{ vector_os_arh }}-unknown-linux-gnu.tar.gz" + mode: 0644 + + - name: Vector. Unzip archive + ansible.builtin.unarchive: + remote_src: true + src: "{{ vector_workdir }}/vector-{{ vector_version }}-{{ vector_os_arh }}-unknown-linux-gnu.tar.gz" + dest: "{{ vector_workdir }}" + + - name: Vector. Install vector binary file + become: true + ansible.builtin.copy: + remote_src: true + src: "{{ vector_workdir }}/vector-{{ vector_os_arh }}-unknown-linux-gnu/bin/vector" + dest: "/usr/bin/" + mode: 0755 + owner: root + group: root + + - name: Vector. Check Vector installation + ansible.builtin.command: "vector --version" + register: var_vector + failed_when: var_vector.rc != 0 + changed_when: var_vector.rc == 0 + + - name: Vector. Create Vector config vector.toml + become: true + ansible.builtin.copy: + remote_src: true + src: "{{ vector_workdir }}/vector-{{ vector_os_arh }}-unknown-linux-gnu/config/vector.toml" + dest: "/etc/vector/" + mode: 0644 + owner: root + group: root + + - name: Vector. 
Create vector.service daemon + become: true + ansible.builtin.copy: + remote_src: true + src: "{{ vector_workdir }}/vector-{{ vector_os_arh }}-unknown-linux-gnu/etc/systemd/vector.service" + dest: "/lib/systemd/system/" + mode: 0644 + owner: root + group: root + notify: Start Vector service + + - name: Vector. Modify vector.service file + become: true + ansible.builtin.replace: + backup: true + path: "/lib/systemd/system/vector.service" + regexp: "^ExecStart=/usr/bin/vector$" + replace: "ExecStart=/usr/bin/vector --config /etc/vector/vector.toml" + notify: Start Vector service + + - name: Vector. Create user vector + become: true + ansible.builtin.user: + create_home: false + name: "{{ vector_os_user }}" + + - name: Vector. Create data_dir + become: true + ansible.builtin.file: + path: "/var/lib/vector" + state: directory + mode: 0755 + owner: "{{ vector_os_user }}" + + + - name: Vector. Remove work directory + ansible.builtin.file: + path: "{{ vector_workdir }}" + state: absent + + tags: vector diff --git a/08-ansible-03-yandex/README.md b/08-ansible-03-yandex/README.md index d891825c8..e1e9135bf 100644 --- a/08-ansible-03-yandex/README.md +++ b/08-ansible-03-yandex/README.md @@ -1,28 +1,133 @@ -# Домашнее задание к занятию "3. Использование Yandex Cloud" - ## Подготовка к выполнению 1. Подготовьте в Yandex Cloud три хоста: для `clickhouse`, для `vector` и для `lighthouse`. + +![Yandex Cloud](https://i.imgur.com/YDbMeOd.png) Ссылка на репозиторий LightHouse: https://github.com/VKCOM/lighthouse ## Основная часть 1. Допишите playbook: нужно сделать ещё один play, который устанавливает и настраивает lighthouse. 
+ +```yaml +- name: Install NGINX + hosts: lighthouse + handlers: + - name: start-nginx + become: true + command: nginx + - name: reload-nginx + become: true + command: nginx -s reload + tasks: + - name: Install epel-release + become: true + ansible.builtin.yum: + name: epel-release.noarch + state: latest + - name: Install NGINX + become: true + ansible.builtin.yum: + name: nginx + state: present + notify: start-nginx + - name: Create general config NGINX + become: true + template: + src: "template/nging.conf.j2" + dest: "/etc/nginx/nginx.conf" + mode: 0755 + notify: reload-nginx + tags: install nginx +- name: Install lighthouse + handlers: + - name: start-nginx + become: true + command: nginx + - name: reload-nginx + become: true + command: nginx -s reload + hosts: lighthouse + pre_tasks: + - name: Install dependencies lighthouse + become: true + ansible.builtin.yum: + name: git + state: present + tasks: + - name: Copy from git lighthouse + git: + repo: "{{ lighthouse_vcs }}" + version: master + dest: "{{ lighthouse_location_dir }}" + - name: Create lighthouse config + become: true + template: + src: "template/lighthouse.conf.j2" + dest: "/etc/nginx/conf.d/default.conf" + mode: 0644 + notify: reload-nginx +``` + 2. При создании tasks рекомендую использовать модули: `get_url`, `template`, `yum`, `apt`. + 3. Tasks должны: скачать статику lighthouse, установить nginx или любой другой webserver, настроить его конфиг для открытия lighthouse, запустить webserver. 
+ +```nginx +user lsd; ## Default: nobody +worker_processes 1; ## Default: 1 +error_log /var/log/nginx/error.log; +pid /var/run/nginx.pid; +events { + worker_connections 1024; ## Default: 1024 +} +http { + include /etc/nginx/mime.types; + default_type application/octet-stream; + log_format main '$remote_addr - $remote_user [$time_local] $status ' + '"$request" $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; + access_log /var/log/nginx/access.log main; + sendfile on; + #tcp_nopush on; + + keepalive_timeout 65; + gzip on; + include /etc/nginx/conf.d/*.conf; +} +``` + 4. Приготовьте свой собственный inventory файл `prod.yml`. + +```yaml +--- +clickhouse: + hosts: + clickhouse: + ansible_host: 130.193.34.245 + ansible_user: admin +vector: + hosts: + vector: + ansible_host: 158.160.19.60 + ansible_user: admin +lighthouse: + hosts: + lighthouse: + ansible_host: 158.160.17.118 + ansible_user: admin + +``` + 5. Запустите `ansible-lint site.yml` и исправьте ошибки, если они есть. + 6. Попробуйте запустить playbook на этом окружении с флагом `--check`. + 7. Запустите playbook на `prod.yml` окружении с флагом `--diff`. Убедитесь, что изменения на системе произведены. + 8. Повторно запустите playbook с флагом `--diff` и убедитесь, что playbook идемпотентен. + 9. Подготовьте README.md файл по своему playbook. В нём должно быть описано: что делает playbook, какие у него есть параметры и теги. +[README.md](https://github.com/NamorNinayzuk/mnt-homeworks/tree/MNT-video/08-ansible-03-yandex/README.md) 10. Готовый playbook выложите в свой репозиторий, поставьте тег `08-ansible-03-yandex` на фиксирующий коммит, в ответ предоставьте ссылку на него. - ---- - -### Как оформить ДЗ? - -Выполненное домашнее задание пришлите ссылкой на .md-файл в вашем репозитории.
- ---- +[Playbook](https://github.com/NamorNinayzuk/mnt-homeworks/tree/MNT-video/08-ansible-03-yandex/playbook) diff --git a/08-ansible-03-yandex/playbook/group_vars/clickhouse/vars.yml b/08-ansible-03-yandex/playbook/group_vars/clickhouse/vars.yml new file mode 100644 index 000000000..da987492b --- /dev/null +++ b/08-ansible-03-yandex/playbook/group_vars/clickhouse/vars.yml @@ -0,0 +1,6 @@ +--- +clickhouse_version: "22.3.3.44" +clickhouse_packages: + - clickhouse-client + - clickhouse-server + - clickhouse-common-static diff --git a/08-ansible-03-yandex/playbook/group_vars/lighthouse/vars.yaml b/08-ansible-03-yandex/playbook/group_vars/lighthouse/vars.yaml new file mode 100644 index 000000000..c458044ac --- /dev/null +++ b/08-ansible-03-yandex/playbook/group_vars/lighthouse/vars.yaml @@ -0,0 +1,3 @@ +lighthouse_vcs: https://github.com/VKCOM/lighthouse.git +lighthouse_location_dir: /home/lsd/lighthouse +lighthouse_access_log_name: lighthouse_access diff --git a/08-ansible-03-yandex/playbook/group_vars/vector/vars.yaml b/08-ansible-03-yandex/playbook/group_vars/vector/vars.yaml new file mode 100644 index 000000000..57900f57b --- /dev/null +++ b/08-ansible-03-yandex/playbook/group_vars/vector/vars.yaml @@ -0,0 +1,3 @@ +--- +vector_url: "https://packages.timber.io/vector/0.28.0/vector-0.28.0-1.x86_64.rpm" +vector_version: "0.28.0" diff --git a/08-ansible-03-yandex/playbook/inventory/prod.yml b/08-ansible-03-yandex/playbook/inventory/prod.yml new file mode 100644 index 000000000..c5275863e --- /dev/null +++ b/08-ansible-03-yandex/playbook/inventory/prod.yml @@ -0,0 +1,16 @@ +--- +clickhouse: + hosts: + clickhouse: + ansible_host: 130.193.34.245 + ansible_user: admin +vector: + hosts: + vector: + ansible_host: 158.160.19.60 + ansible_user: admin +lighthouse: + hosts: + lighthouse: + ansible_host: 158.160.17.118 + ansible_user: admin diff --git a/08-ansible-03-yandex/playbook/requirements.yml b/08-ansible-03-yandex/playbook/requirements.yml new file mode 100644 index 
000000000..41e3f42eb --- /dev/null +++ b/08-ansible-03-yandex/playbook/requirements.yml @@ -0,0 +1,18 @@ +--- +- name: nginx-role + src: https://github.com/NamorNinayzuk/nginx-role + scm: git + version: 0.0.1 +- name: lighthouse-role + src: https://github.com/NamorNinayzuk/lighthouse-role + scm: git + version: 0.0.3 +- name: vector-role + src: https://github.com/NamorNinayzuk/vector-role + scm: git + version: 0.0.4 +- src: git@github.com:AlexeySetevoi/ansible-clickhouse.git +# - src: git@gitlab.com:naturalis/core/analytics/ansible-clickhouse.git + name: clickhouse + scm: git + version: "1.11.0" diff --git a/08-ansible-03-yandex/playbook/site.yml b/08-ansible-03-yandex/playbook/site.yml new file mode 100644 index 000000000..68d18b635 --- /dev/null +++ b/08-ansible-03-yandex/playbook/site.yml @@ -0,0 +1,14 @@ +--- +- name: Install nginx, lighthouse + hosts: lighthouse + roles: + - nginx-role + - lighthouse-role +- name: Install vector + hosts: vector + roles: + - vector-role +- name: Install clickhouse + hosts: clickhouse + roles: + - clickhouse diff --git a/08-ansible-03-yandex/playbook/template/lighthouse.conf.j2 b/08-ansible-03-yandex/playbook/template/lighthouse.conf.j2 new file mode 100644 index 000000000..d6bbe193c --- /dev/null +++ b/08-ansible-03-yandex/playbook/template/lighthouse.conf.j2 @@ -0,0 +1,12 @@ +server { + listen 80; + server_name localhost; + + access_log /var/log/nginx/{{ lighthouse_access_log_name }}.log main; + + location / { + root {{ lighthouse_location_dir }}; + index index.html; + } + +} diff --git a/08-ansible-03-yandex/playbook/template/nging.conf.j2 b/08-ansible-03-yandex/playbook/template/nging.conf.j2 new file mode 100644 index 000000000..fd980bf06 --- /dev/null +++ b/08-ansible-03-yandex/playbook/template/nging.conf.j2 @@ -0,0 +1,22 @@ +user lsd; ## Default: nobody +worker_processes 1; ## Default: 1 + +error_log /var/log/nginx/error.log; +pid /var/run/nginx.pid; + + +events { + worker_connections 1024; ## Default: 1024 +} + +http { +
include /etc/nginx/mime.types; + default_type application/octet-stream; + + log_format main '$remote_addr - $remote_user [$time_local] $status ' + '"$request" $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; + + access_log /var/log/nginx/access.log main; + + sendfile on; diff --git a/08-ansible-03-yandex/playbook/template/vector.cfg.j2 b/08-ansible-03-yandex/playbook/template/vector.cfg.j2 new file mode 100644 index 000000000..cf9414aaf --- /dev/null +++ b/08-ansible-03-yandex/playbook/template/vector.cfg.j2 @@ -0,0 +1,19 @@ +vector_config: + sources: + our_log: + type: file + ignore_older_secs: 600 + include: + - /home/lsd/logs/*.log + read_from: beginning + sinks: + to_clickhouse: + type: clickhouse + inputs: + - our_log + database: custom + endpoint: http://158.160.19.60:8123 + table: my_table + compression: gzip + healthcheck: false + skip_unknown_fields: true diff --git a/08-ansible-04-role/README.md b/08-ansible-04-role/README.md index 79410726c..29c27e314 100644 --- a/08-ansible-04-role/README.md +++ b/08-ansible-04-role/README.md @@ -20,17 +20,134 @@ ``` 2. При помощи `ansible-galaxy` скачать себе эту роль. + +Загрузка ролей выполняется командой: `ansible-galaxy install -r <файл>`, где `<файл>` - YAML файл с информацией о требуемых компонентах (ролях), включая информацию о способе их получения. + +Если используемая роль уже загружена или её нужно обновить, то необходимо добавить ключ `--force` +```console +┌──(kali㉿kali)-[~/devops-netology8.4] +└─$ ansible-galaxy install -r requirements.yml +Starting galaxy role install process +- extracting clickhouse to /home/kali/.ansible/roles/clickhouse +- clickhouse (1.11.0) was installed successfully +┌──(kali㉿kali)-[~/devops-netology8.4] +└─$ +``` 3. Создать новый каталог с ролью при помощи `ansible-galaxy role init vector-role`. -4. На основе tasks из старого playbook заполните новую role. Разнесите переменные между `vars` и `default`. -5. 
Перенести нужные шаблоны конфигов в `templates`. -6. Описать в `README.md` обе роли и их параметры. -7. Повторите шаги 3-6 для lighthouse. Помните, что одна роль должна настраивать один продукт. -8. Выложите все roles в репозитории. Проставьте тэги, используя семантическую нумерацию Добавьте roles в `requirements.yml` в playbook. -9. Переработайте playbook на использование roles. Не забудьте про зависимости lighthouse и возможности совмещения `roles` с `tasks`. -10. Выложите playbook в репозиторий. -11. В ответ приведите ссылки на оба репозитория с roles и одну ссылку на репозиторий с playbook. +Для упрощения подготовки роли можно воспользоваться командой `ansible-galaxy role init <роль>`, где `<роль>` - имя инициализируемой роли. +Данная команда создаст шаблон новой роли, а именно - набор каталогов и предзаполненных файлов внутри директории `<роль>`. + +Пример вывода команды: +```console +┌──(kali㉿kali)-[~/devops-netology8.4] +└─$ ansible-galaxy role init vector-role +- Role vector-role was created successfully +┌──(kali㉿kali)-[~/devops-netology8.4] +└─$ ansible-galaxy role init lighthouse-role +- Role lighthouse-role was created successfully + +``` + +4. На основе tasks из старого playbook заполните новую role. Разнесите переменные между `vars` и `default`. + +В структуре каталогов роли переменные делятся на две группы: `defaults` и `vars`. +В **defaults** хранятся переменные и их значения по умолчанию, которые пользователь может переопределить на любом уровне (**group vars**, **host vars** и т.п.). +В **vars** хранятся переменные и их значения, которые обычно не предназначены для переопределения пользователем, а используются для упрощения дальнейшей разработки роли. +Исключением являются [--extra-vars](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html): переменные, переданные в командной строке `ansible-playbook`, переопределяют даже значения из `vars`.
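+Разницу в приоритетах можно показать на условном наброске для vector-role. Значения взяты из переменных этого репозитория, но само разбиение по файлам здесь - иллюстративное предположение, фактическое содержимое роли может отличаться:

```yaml
# Условный набросок (разбиение переменных по файлам - предположение,
# сверяйтесь с фактическим содержимым роли).

# roles/vector-role/defaults/main.yml - значения по умолчанию:
# их можно переопределить в inventory, group_vars или host_vars.
vector_version: "0.28.0"

# roles/vector-role/vars/main.yml - "служебные" переменные роли:
# обычным способом не переопределяются, только через --extra-vars.
vector_url: "https://packages.timber.io/vector/{{ vector_version }}/vector-{{ vector_version }}-1.x86_64.rpm"
```

Запуск вида `ansible-playbook site.yml -e "vector_version=0.25.0"` переопределит значение в обоих случаях, так как у `--extra-vars` наивысший приоритет.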
+ + +# Playbook для установки Clickhouse, Vector и Lighthouse +## Описание + +Репозиторий файлов [clickhouse](https://github.com/NamorNinayzuk/ansible-clickhouse) + +Репозиторий роли для [Vector](https://github.com/NamorNinayzuk/vector-role) + +Репозиторий роли для [Lighthouse](https://github.com/NamorNinayzuk/lighthouse-role) + +### Vector-role +Роль устанавливает ПО Vector. + +Предварительные требования +------------ + +Предполагается, что роль будет выполняться на deb-based дистрибутивах Linux (Debian, Ubuntu). + +Переменные роли +-------------- + +В [переменных по умолчанию](https://github.com/NamorNinayzuk/vector-role/blob/main/defaults/main.yml) указана версия ПО Vector 0.25.0, в случае необходимости её нужно изменить. + +Зависимости +------------ + +Нет зависимостей + +### Lighthouse-role + +Роль устанавливает ПО Nginx и Lighthouse. + +Предварительные требования +------------ + +Предполагается, что роль будет выполняться на deb-based дистрибутивах Linux (Debian, Ubuntu). +Предполагается, что на машине уже будет установлена утилита unzip. + +Переменные роли +-------------- + +В [переменных по умолчанию](https://github.com/NamorNinayzuk/lighthouse-role/blob/main/defaults/main.yml) указана версия ПО Nginx 1.18.0, в случае необходимости её нужно изменить. +В [изменяемых переменных](https://github.com/NamorNinayzuk/lighthouse-role/blob/main/vars/main.yml) указан порт, на котором будет слушать ПО Lighthouse. По умолчанию это порт 80, в случае необходимости его можно изменить.
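+Переопределить значение из defaults роли удобно на уровне group_vars. Набросок ниже условный: имя переменной - предположение, фактическое имя нужно сверять с defaults/main.yml роли:

```yaml
# group_vars/lighthouse/vars.yaml - условный пример переопределения
# значения из defaults роли (имя переменной - предположение,
# сверьтесь с defaults/main.yml lighthouse-role).
nginx_version: "1.20.2"
```

Порт из `vars/main.yml` роли таким способом переопределить не получится: переменные `vars` роли имеют более высокий приоритет, чем inventory и переменные плея, и заменяются фактически только через `--extra-vars`.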
+ +Зависимости +------------ + +Нет зависимостей + +Пример плейбука с использованием роли +---------------- +Пример для использования на машинах, на которых уже установлена утилита Unzip: +```yaml +- name: Install Lighthouse + hosts: lighthouse + roles: + - lighthouse-role +``` + +Пример с установкой утилиты Unzip в рамках pre_tasks в плее, использующем данную роль: + +```yaml +- name: Install Lighthouse + hosts: lighthouse + pre_tasks: + - name: install unzip + become: true + ansible.builtin.apt: + name: unzip + state: present + update_cache: yes + roles: + - lighthouse-role +``` --- +## Порядок выполнения плейбука +1 плей: Запускает роль по установке clickhouse на всех хостах, указанных в группе clickhouse в inventory файле. +2 плей: Запускает роль по установке vector на всех хостах, указанных в группе vector в inventory файле. +3 плей: Запускает роль по установке nginx и lighthouse на всех хостах, указанных в группе lighthouse в inventory файле. + +## inventory файл +Перед выполнением плейбука необходимо указать хосты в соответствующие группы в inventory файле. +Inventory файл содержит 3 группы хостов: +1. clickhouse +2. vector +3. lighthouse + +На хостах, указанных в группе, будет выполнена роль, соответствующая имени группы. + + + ### Как оформить ДЗ?
diff --git a/08-ansible-04-role/inventory/prod.yml b/08-ansible-04-role/inventory/prod.yml new file mode 100644 index 000000000..c5275863e --- /dev/null +++ b/08-ansible-04-role/inventory/prod.yml @@ -0,0 +1,16 @@ +--- +clickhouse: + hosts: + clickhouse: + ansible_host: 130.193.34.245 + ansible_user: admin +vector: + hosts: + vector: + ansible_host: 158.160.19.60 + ansible_user: admin +lighthouse: + hosts: + lighthouse: + ansible_host: 158.160.17.118 + ansible_user: admin diff --git a/08-ansible-04-role/playbook/requirements.yml b/08-ansible-04-role/playbook/requirements.yml new file mode 100644 index 000000000..9cdca71fa --- /dev/null +++ b/08-ansible-04-role/playbook/requirements.yml @@ -0,0 +1,14 @@ +--- +- name: lighthouse-role + src: https://github.com/NamorNinayzuk/lighthouse-role + scm: git + version: 1.0.0 +- name: vector-role + src: https://github.com/NamorNinayzuk/vector-role + scm: git + version: 1.0.0 +- src: git@github.com:AlexeySetevoi/ansible-clickhouse.git + name: clickhouse + scm: git + version: "1.11.0" +--- diff --git a/08-ansible-04-role/playbook/site.yml b/08-ansible-04-role/playbook/site.yml new file mode 100644 index 000000000..0b7e45d3d --- /dev/null +++ b/08-ansible-04-role/playbook/site.yml @@ -0,0 +1,22 @@ +--- +- name: Install Clickhouse + hosts: clickhouse + roles: + - clickhouse + +- name: Install Vector + hosts: vector + roles: + - vector-role + +- name: Install Lighthouse + hosts: lighthouse + pre_tasks: + - name: install unzip + become: true + ansible.builtin.apt: + name: unzip + state: present + update_cache: yes + roles: + - lighthouse-role diff --git a/08-ansible-05-testing/README.md b/08-ansible-05-testing/README.md index 85b9c005e..784cac162 100644 --- a/08-ansible-05-testing/README.md +++ b/08-ansible-05-testing/README.md @@ -1,8 +1,8 @@ # Домашнее задание к занятию "5. Тестирование roles" ## Подготовка к выполнению -1. Установите molecule: `pip3 install "molecule==3.5.2"` -2. 
Выполните `docker pull aragast/netology:latest` - это образ с podman, tox и несколькими пайтонами (3.7 и 3.9) внутри +> 1. Установите molecule: `pip3 install "molecule==3.5.2"` +> 2. Выполните `docker pull aragast/netology:latest` - это образ с podman, tox и несколькими пайтонами (3.7 и 3.9) внутри ## Основная часть @@ -10,23 +10,3702 @@ ### Molecule -1. Запустите `molecule test -s centos7` внутри корневой директории clickhouse-role, посмотрите на вывод команды. -2. Перейдите в каталог с ролью vector-role и создайте сценарий тестирования по умолчанию при помощи `molecule init scenario --driver-name docker`. -3. Добавьте несколько разных дистрибутивов (centos:8, ubuntu:latest) для инстансов и протестируйте роль, исправьте найденные ошибки, если они есть. -4. Добавьте несколько assert'ов в verify.yml файл для проверки работоспособности vector-role (проверка, что конфиг валидный, проверка успешности запуска, etc). Запустите тестирование роли повторно и проверьте, что оно прошло успешно. -5. Добавьте новый тег на коммит с рабочим сценарием в соответствии с семантическим версионированием. +### 1. Запустите `molecule test -s centos_7` внутри корневой директории clickhouse-role, посмотрите на вывод команды. -### Tox +Используемое окружение: +![system schema](https://i.imgur.com/AdLOUeg.png) -1. Добавьте в директорию с vector-role файлы из [директории](./example) -2. Запустите `docker run --privileged=True -v :/opt/vector-role -w /opt/vector-role -it aragast/netology:latest /bin/bash`, где path_to_repo - путь до корня репозитория с vector-role на вашей файловой системе. -3. Внутри контейнера выполните команду `tox`, посмотрите на вывод. -5. Создайте облегчённый сценарий для `molecule` с драйвером `molecule_podman`. Проверьте его на исполнимость. -6. Пропишите правильную команду в `tox.ini` для того чтобы запускался облегчённый сценарий. -8. Запустите команду `tox`. Убедитесь, что всё отработало успешно. -9. 
Добавьте новый тег на коммит с рабочим сценарием в соответствии с семантическим версионированием. -После выполнения у вас должно получится два сценария molecule и один tox.ini файл в репозитории. Не забудьте указать в ответе теги решений Tox и Molecule заданий. В качестве решения пришлите ссылку на ваш репозиторий и скриншоты этапов выполнения задания. +Для работы потребуется наличие пакета **flake8** (установка, например для **apt**: `sudo apt install flake8`). +А также существует несовместимость контейнеризации если установлен **Podman** в качестве хоста **Docker**, которую нужно учесть. + +Разбор вывода теста molecule... Часть лога: + +```console +┌──(kali㉿kali)-[~/08-ansible-05/clickhouse] +└─$ molecule test -s centos_7 +INFO centos_7 scenario test matrix: dependency, lint, cleanup, destroy, syntax, create, prepare, converge, idempotence, side_effect, verify, cleanup, destroy +INFO Performing prerun... +INFO Set ANSIBLE_LIBRARY=/home/.cache/ansible-compat/b9a93c/modules:/home/.ansible/plugins/modules:/usr/share/ansible/plugins/modules +INFO Set ANSIBLE_COLLECTIONS_PATH=/home/.cache/ansible-compat/b9a93c/../home/.ansible:/usr/share/ansible +INFO Set ANSIBLE_ROLES_PATH=/home/.cache/ansible-compat/b9a93c/roles:/home/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > dependency +WARNING Skipping, missing the requirements file. +WARNING Skipping, missing the requirements file. 
+INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > lint +COMMAND: yamllint . +ansible-lint +flake8 + +WARNING Listing 56 violation(s) that are fatal +fqcn-builtins: Use FQCN for builtin actions. +handlers/main.yml:3 Task/Handler: Restart Clickhouse Service + +schema: {'name': 'EL', 'versions': [7, 8]} is not valid under any of the given schemas (schema[meta]) +meta/main.yml:1 Returned errors will not include exact line numbers, but they will mention +the schema name being used as a tag, like ``playbook-schema``, +``tasks-schema``. + +This rule is not skippable and stops further processing of the file. + +Schema bugs should be reported towards (https://github.com/ansible/schemas) project instead of ansible-lint. + +If incorrect schema was picked, you might want to either: + +* move the file to standard location, so its file is detected correctly. +* use ``kinds:`` option in linter config to help it pick correct file type. + + +fqcn-builtins: Use FQCN for builtin actions. +molecule/centos_7/converge.yml:5 Task/Handler: Include ansible-clickhouse + +fqcn-builtins: Use FQCN for builtin actions. +molecule/centos_7/verify.yml:8 Task/Handler: Example assertion + +fqcn-builtins: Use FQCN for builtin actions. +molecule/centos_8/converge.yml:5 Task/Handler: Include ansible-clickhouse + +fqcn-builtins: Use FQCN for builtin actions. 
+molecule/centos_8/verify.yml:8 Task/Handler: Example assertion + +schema: None is not of type 'object' (schema[inventory]) +molecule/resources/inventory/hosts.yml:1 Returned errors will not include exact line numbers, but they will mention +the schema name being used as a tag, like ``playbook-schema``, +``tasks-schema``. + +This rule is not skippable and stops further processing of the file. + +Schema bugs should be reported towards (https://github.com/ansible/schemas) project instead of ansible-lint. + +If incorrect schema was picked, you might want to either: + +* move the file to standard location, so its file is detected correctly. +* use ``kinds:`` option in linter config to help it pick correct file type. + + +fqcn-builtins: Use FQCN for builtin actions. +molecule/resources/playbooks/converge.yml:5 Task/Handler: Apply Clickhouse Role + +fqcn-builtins: Use FQCN for builtin actions. +molecule/ubuntu_focal/converge.yml:5 Task/Handler: Include ansible-clickhouse + +fqcn-builtins: Use FQCN for builtin actions. +molecule/ubuntu_focal/verify.yml:8 Task/Handler: Example assertion + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/db.yml:2 Task/Handler: Set ClickHose Connection String + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/db.yml:5 Task/Handler: Gather list of existing databases + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/db.yml:11 Task/Handler: Config | Delete database config + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/db.yml:20 Task/Handler: Config | Create database config + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/dict.yml:2 Task/Handler: Config | Generate dictionary config + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:2 Task/Handler: Check clickhouse config, data and logs + +fqcn-builtins: Use FQCN for builtin actions. 
+tasks/configure/sys.yml:17 Task/Handler: Config | Create config.d folder + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:26 Task/Handler: Config | Create users.d folder + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:35 Task/Handler: Config | Generate system config + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:45 Task/Handler: Config | Generate users config + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:54 Task/Handler: Config | Generate remote_servers config + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:65 Task/Handler: Config | Generate macros config + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:76 Task/Handler: Config | Generate zookeeper servers config + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:87 Task/Handler: Config | Fix interserver_http_port and intersever_https_port collision + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/apt.yml:5 Task/Handler: Install by APT | Apt-key add repo key + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/apt.yml:12 Task/Handler: Install by APT | Remove old repo + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/apt.yml:20 Task/Handler: Install by APT | Repo installation + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/apt.yml:27 Task/Handler: Install by APT | Package installation + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/apt.yml:36 Task/Handler: Install by APT | Package installation + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/apt.yml:45 Task/Handler: Hold specified version during APT upgrade | Package installation + +risky-file-permissions: File permissions unset or incorrect. +tasks/install/apt.yml:45 Task/Handler: Hold specified version during APT upgrade | Package installation + +fqcn-builtins: Use FQCN for builtin actions. 
+tasks/install/dnf.yml:5 Task/Handler: Install by YUM | Ensure clickhouse repo GPG key imported + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/dnf.yml:12 Task/Handler: Install by YUM | Ensure clickhouse repo installed + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/dnf.yml:24 Task/Handler: Install by YUM | Ensure clickhouse package installed (latest) + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/dnf.yml:32 Task/Handler: Install by YUM | Ensure clickhouse package installed (version {{ clickhouse_version }}) + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/yum.yml:5 Task/Handler: Install by YUM | Ensure clickhouse repo installed + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/yum.yml:16 Task/Handler: Install by YUM | Ensure clickhouse package installed (latest) + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/yum.yml:24 Task/Handler: Install by YUM | Ensure clickhouse package installed (version {{ clickhouse_version }}) + +fqcn-builtins: Use FQCN for builtin actions. +tasks/main.yml:3 Task/Handler: Include OS Family Specific Variables + +fqcn-builtins: Use FQCN for builtin actions. +tasks/main.yml:39 Task/Handler: Notify Handlers Now + +fqcn-builtins: Use FQCN for builtin actions. +tasks/main.yml:45 Task/Handler: Wait for Clickhouse Server to Become Ready + +fqcn-builtins: Use FQCN for builtin actions. +tasks/params.yml:3 Task/Handler: Set clickhouse_service_enable + +fqcn-builtins: Use FQCN for builtin actions. +tasks/params.yml:7 Task/Handler: Set clickhouse_service_ensure + +fqcn-builtins: Use FQCN for builtin actions. +tasks/precheck.yml:1 Task/Handler: Requirements check | Checking sse4_2 support + +fqcn-builtins: Use FQCN for builtin actions. +tasks/precheck.yml:5 Task/Handler: Requirements check | Not supported distribution && release + +fqcn-builtins: Use FQCN for builtin actions. 
+tasks/remove.yml:3 Task/Handler: Remove clickhouse config,data and logs + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/apt.yml:5 Task/Handler: Uninstall by APT | Package uninstallation + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/apt.yml:12 Task/Handler: Uninstall by APT | Repo uninstallation + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/apt.yml:18 Task/Handler: Uninstall by APT | Apt-key remove repo key + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/dnf.yml:5 Task/Handler: Uninstall by YUM | Ensure clickhouse package uninstalled + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/dnf.yml:12 Task/Handler: Uninstall by YUM | Ensure clickhouse repo uninstalled + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/dnf.yml:19 Task/Handler: Uninstall by YUM | Ensure clickhouse key uninstalled + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/yum.yml:5 Task/Handler: Uninstall by YUM | Ensure clickhouse package uninstalled + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/yum.yml:12 Task/Handler: Uninstall by YUM | Ensure clickhouse repo uninstalled + +fqcn-builtins: Use FQCN for builtin actions. +tasks/service.yml:3 Task/Handler: Ensure {{ clickhouse_service }} is enabled: {{ clickhouse_service_enable }} and state: {{ clickhouse_service_ensure }} + +var-spacing: Jinja2 variables and filters should have spaces before and after. +vars/debian.yml:4 .clickhouse_repo_old + +You can skip specific rules or tags by adding them to your configuration file: +# .config/ansible-lint.yml +warn_list: # or 'skip_list' to silence them completely + - experimental # all rules tagged as experimental + - fqcn-builtins # Use FQCN for builtin actions. + - var-spacing # Jinja2 variables and filters should have spaces before and after. + +Finished with 53 failure(s), 3 warning(s) on 56 files. 
+INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to + /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > cleanup +WARNING Skipping, cleanup playbook not configured. +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to + /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > destroy +INFO Sanity checks: 'docker' + +PLAY [Destroy] ***************************************************************** + +TASK [Destroy molecule instance(s)] ******************************************** +changed: [localhost] => (item=centos_7) + +TASK [Wait for instance(s) deletion to complete] ******************************* +ok: [localhost] => (item=centos_7) + +TASK [Delete docker networks(s)] *********************************************** + +PLAY RECAP ********************************************************************* +localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 + +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to + 
/home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > syntax + +playbook: /home/08-ansible-05/ansible-clickhouse/molecule/resources/playbooks/converge.yml +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to + /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > create + +PLAY [Create] ****************************************************************** + +TASK [Log into a Docker registry] ********************************************** +skipping: [localhost] => (item=None) +skipping: [localhost] + +TASK [Check presence of custom Dockerfiles] ************************************ +ok: [localhost] => (item={'capabilities': ['SYS_ADMIN'], 'command': '/usr/sbin/init', 'dockerfile': '../resources/Dockerfile.j2', 'env': {'ANSIBLE_USER': 'ansible', 'DEPLOY_GROUP': 'deployer', 'SUDO_GROUP': 'wheel', 'container': 'docker'}, 'image': 'centos:7', 'name': 'centos_7', 'privileged': True, 'tmpfs': ['/run', '/tmp'], 'volumes': ['/sys/fs/cgroup:/sys/fs/cgroup']}) + +TASK [Create Dockerfiles from image names] ************************************* +changed: [localhost] => (item={'capabilities': ['SYS_ADMIN'], 
'command': '/usr/sbin/init', 'dockerfile': '../resources/Dockerfile.j2', 'env': {'ANSIBLE_USER': 'ansible', 'DEPLOY_GROUP': 'deployer', 'SUDO_GROUP': 'wheel', 'container': 'docker'}, 'image': 'centos:7', 'name': 'centos_7', 'privileged': True, 'tmpfs': ['/run', '/tmp'], 'volumes': ['/sys/fs/cgroup:/sys/fs/cgroup']}) + +TASK [Discover local Docker images] ******************************************** +ok: [localhost] => (item={'diff': [], 'dest': '/home/.cache/molecule/ansible-clickhouse/centos_7/Dockerfile_centos_7', 'src': '/home/.ansible/tmp/ansible-tmp-1659784396.3046184-3873-95148761793267/source', 'md5sum': 'e90d08cd34f49a5f8a41a07de1348618', 'checksum': '4b70768619482424811f2977aa277a5acf2b13a1', 'changed': True, 'uid': 1000, 'gid': 1000, 'owner': 'sa', 'group': 'sa', 'mode': '0600', 'state': 'file', 'size': 2199, 'invocation': {'module_args': {'src': '/home/.ansible/tmp/ansible-tmp-1659784396.3046184-3873-95148761793267/source', 'dest': '/home/.cache/molecule/ansible-clickhouse/centos_7/Dockerfile_centos_7', 'mode': '0600', 'follow': False, '_original_basename': 'Dockerfile.j2', 'checksum': '4b70768619482424811f2977aa277a5acf2b13a1', 'backup': False, 'force': True, 'unsafe_writes': False, 'content': None, 'validate': None, 'directory_mode': None, 'remote_src': None, 'local_follow': None, 'owner': None, 'group': None, 'seuser': None, 'serole': None, 'selevel': None, 'setype': None, 'attributes': None}}, 'failed': False, 'item': {'capabilities': ['SYS_ADMIN'], 'command': '/usr/sbin/init', 'dockerfile': '../resources/Dockerfile.j2', 'env': {'ANSIBLE_USER': 'ansible', 'DEPLOY_GROUP': 'deployer', 'SUDO_GROUP': 'wheel', 'container': 'docker'}, 'image': 'centos:7', 'name': 'centos_7', 'privileged': True, 'tmpfs': ['/run', '/tmp'], 'volumes': ['/sys/fs/cgroup:/sys/fs/cgroup']}, 'ansible_loop_var': 'item', 'i': 0, 'ansible_index_var': 'i'}) + +TASK [Build an Ansible compatible image (new)] ********************************* +changed: [localhost] => 
(item=molecule_local/centos:7) + +TASK [Create docker network(s)] ************************************************ + +TASK [Determine the CMD directives] ******************************************** +ok: [localhost] => (item={'capabilities': ['SYS_ADMIN'], 'command': '/usr/sbin/init', 'dockerfile': '../resources/Dockerfile.j2', 'env': {'ANSIBLE_USER': 'ansible', 'DEPLOY_GROUP': 'deployer', 'SUDO_GROUP': 'wheel', 'container': 'docker'}, 'image': 'centos:7', 'name': 'centos_7', 'privileged': True, 'tmpfs': ['/run', '/tmp'], 'volumes': ['/sys/fs/cgroup:/sys/fs/cgroup']}) + +TASK [Create molecule instance(s)] ********************************************* +changed: [localhost] => (item=centos_7) + +TASK [Wait for instance(s) creation to complete] ******************************* +FAILED - RETRYING: [localhost]: Wait for instance(s) creation to complete (300 retries left). +changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': '478592296224.4738', 'results_file': '/home/.ansible_async/478592296224.4738', 'changed': True, 'item': {'capabilities': ['SYS_ADMIN'], 'command': '/usr/sbin/init', 'dockerfile': '../resources/Dockerfile.j2', 'env': {'ANSIBLE_USER': 'ansible', 'DEPLOY_GROUP': 'deployer', 'SUDO_GROUP': 'wheel', 'container': 'docker'}, 'image': 'centos:7', 'name': 'centos_7', 'privileged': True, 'tmpfs': ['/run', '/tmp'], 'volumes': ['/sys/fs/cgroup:/sys/fs/cgroup']}, 'ansible_loop_var': 'item'}) + +PLAY RECAP ********************************************************************* +localhost : ok=7 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 + +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to + /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO 
Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > prepare +WARNING Skipping, prepare playbook not configured. +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to + /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > converge + +PLAY [Converge] **************************************************************** + +TASK [Gathering Facts] ********************************************************* +ok: [centos_7] + +TASK [Apply Clickhouse Role] *************************************************** + +TASK [ansible-clickhouse : Include OS Family Specific Variables] *************** +ok: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/08-ansible-05/ansible-clickhouse/tasks/precheck.yml for centos_7 + +TASK [ansible-clickhouse : Requirements check | Checking sse4_2 support] ******* +ok: [centos_7] + +TASK [ansible-clickhouse : Requirements check | Not supported distribution && release] *** +skipping: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/08-ansible-05/ansible-clickhouse/tasks/params.yml for centos_7 + +TASK [ansible-clickhouse : Set clickhouse_service_enable] ********************** +ok: [centos_7] + +TASK [ansible-clickhouse : Set clickhouse_service_ensure] ********************** +ok: [centos_7] + +TASK 
[ansible-clickhouse : include_tasks] ************************************** +included: /home/08-ansible-05/ansible-clickhouse/tasks/install/yum.yml for centos_7 + +TASK [ansible-clickhouse : Install by YUM | Ensure clickhouse repo installed] *** +--- before: /etc/yum.repos.d/clickhouse.repo ++++ after: /etc/yum.repos.d/clickhouse.repo +@@ -0,0 +1,7 @@ ++[clickhouse] ++async = 1 ++baseurl = https://packages.clickhouse.com/rpm/stable/ ++enabled = 1 ++gpgcheck = 0 ++name = Clickhouse repo ++ + +changed: [centos_7] + +TASK [ansible-clickhouse : Install by YUM | Ensure clickhouse package installed (latest)] *** +changed: [centos_7] + +TASK [ansible-clickhouse : Install by YUM | Ensure clickhouse package installed (version latest)] *** +skipping: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/08-ansible-05/ansible-clickhouse/tasks/configure/sys.yml for centos_7 + +TASK [ansible-clickhouse : Check clickhouse config, data and logs] ************* +ok: [centos_7] => (item=/var/log/clickhouse-server) +--- before ++++ after +@@ -1,4 +1,4 @@ + { +- "mode": "0700", ++ "mode": "0770", + "path": "/etc/clickhouse-server" + } + +changed: [centos_7] => (item=/etc/clickhouse-server) +--- before ++++ after +@@ -1,7 +1,7 @@ + { +- "group": 0, +- "mode": "0755", +- "owner": 0, ++ "group": 996, ++ "mode": "0770", ++ "owner": 999, + "path": "/var/lib/clickhouse/tmp/", +- "state": "absent" ++ "state": "directory" + } + +changed: [centos_7] => (item=/var/lib/clickhouse/tmp/) +--- before ++++ after +@@ -1,4 +1,4 @@ + { +- "mode": "0700", ++ "mode": "0770", + "path": "/var/lib/clickhouse/" + } + +changed: [centos_7] => (item=/var/lib/clickhouse/) + +TASK [ansible-clickhouse : Config | Create config.d folder] ******************** +--- before ++++ after +@@ -1,4 +1,4 @@ + { +- "mode": "0500", ++ "mode": "0770", + "path": "/etc/clickhouse-server/config.d" + } + +changed: [centos_7] + +TASK [ansible-clickhouse : Config | Create 
users.d folder] ********************* +--- before ++++ after +@@ -1,4 +1,4 @@ + { +- "mode": "0500", ++ "mode": "0770", + "path": "/etc/clickhouse-server/users.d" + } + +changed: [centos_7] + +TASK [ansible-clickhouse : Config | Generate system config] ******************** +--- before ++++ after: /home/.ansible/tmp/ansible-local-4875ocmz5_ty/tmpdefjsh_1/config.j2 +@@ -0,0 +1,382 @@ ++ ++ ++ ++ ++ ++ trace ++ /var/log/clickhouse-server/clickhouse-server.log ++ /var/log/clickhouse-server/clickhouse-server.err.log ++ 1000M ++ 10 ++ ++ ++ 8123 ++ ++ 9000 ++ ++ ++ ++ ++ ++ /etc/clickhouse-server/server.crt ++ /etc/clickhouse-server/server.key ++ ++ /etc/clickhouse-server/dhparam.pem ++ none ++ true ++ true ++ sslv2,sslv3 ++ true ++ ++ ++ ++ true ++ true ++ sslv2,sslv3 ++ true ++ ++ ++ ++ RejectCertificateHandler ++ ++ ++ ++ ++ ++ ++ ++ ++ 9009 ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ 127.0.0.1 ++ ++ 2048 ++ 3 ++ ++ ++ 100 ++ ++ ++ ++ ++ ++ 8589934592 ++ ++ ++ 5368709120 ++ ++ ++ ++ /var/lib/clickhouse/ ++ ++ ++ /var/lib/clickhouse/tmp/ ++ ++ ++ /var/lib/clickhouse/user_files/ ++ ++ ++ users.xml ++ ++ ++ default ++ ++ ++ ++ ++ ++ default ++ ++ ++ ++ ++ ++ ++ ++ ++ False ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ 3600 ++ ++ ++ ++ True ++ ++ ++ 3600 ++ ++ ++ 60 ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ system ++ query_log
++ ++ toYYYYMM(event_date) ++ ++ 7500 ++
++ ++ ++ ++ system ++ query_thread_log
++ toYYYYMM(event_date) ++ ++ 7500 ++
++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ *_dictionary.xml ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ /clickhouse/task_queue/ddl ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ click_cost ++ any ++ ++ 0 ++ 3600 ++ ++ ++ 86400 ++ 60 ++ ++ ++ ++ max ++ ++ 0 ++ 60 ++ ++ ++ 3600 ++ 300 ++ ++ ++ 86400 ++ 3600 ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ /var/lib/clickhouse//format_schemas/ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++
+ +changed: [centos_7] + +TASK [ansible-clickhouse : Config | Generate users config] ********************* +--- before ++++ after: /home/.ansible/tmp/ansible-local-4875ocmz5_ty/tmp0w4na9zy/users.j2 +@@ -0,0 +1,106 @@ ++ ++ ++ ++ ++ ++ ++ ++ 10000000000 ++ 0 ++ random ++ 100 ++ ++ ++ 1 ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ::1 ++ 127.0.0.1 ++ ++ default ++ default ++ ++ ++ ++ ++ ++ ::1 ++ 127.0.0.1 ++ ++ readonly ++ default ++ ++ ++ ++ ++ f2ca1bb6c7e907d06dafe4687e579fce76b37e4e93b7605022da52e6ccc26fd2 ++ ++ ::1 ++ 127.0.0.1 ++ ++ default ++ default ++ ++ testu1 ++ ++ ++ ++ ++ testplpassword ++ ++ ::1 ++ 127.0.0.1 ++ ++ default ++ default ++ ++ testu2 ++ ++ ++ ++ ++ testplpassword ++ ++ 192.168.0.0/24 ++ 10.0.0.0/8 ++ ++ default ++ default ++ ++ testu1 ++ testu2 ++ testu3 ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ 3600 ++ 0 ++ 0 ++ 0 ++ 0 ++ 0 ++ ++ ++ ++ + +changed: [centos_7] + +TASK [ansible-clickhouse : Config | Generate remote_servers config] ************ +skipping: [centos_7] + +TASK [ansible-clickhouse : Config | Generate macros config] ******************** +skipping: [centos_7] + +TASK [ansible-clickhouse : Config | Generate zookeeper servers config] ********* +skipping: [centos_7] + +TASK [ansible-clickhouse : Config | Fix interserver_http_port and intersever_https_port collision] *** +skipping: [centos_7] + +TASK [ansible-clickhouse : Notify Handlers Now] ******************************** + +RUNNING HANDLER [ansible-clickhouse : Restart Clickhouse Service] ************** +ok: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/08-ansible-05/ansible-clickhouse/tasks/service.yml for centos_7 + +TASK [ansible-clickhouse : Ensure clickhouse-server.service is enabled: True and state: restarted] *** +fatal: [centos_7]: FAILED! 
=> {"changed": false, "msg": "Service is in unknown state", "status": {}} + +PLAY RECAP ********************************************************************* +centos_7 : ok=18 changed=7 unreachable=0 failed=1 skipped=6 rescued=0 ignored=0 + +CRITICAL Ansible return code was 2, command was: ['ansible-playbook', '-D', '--inventory', '/home/.cache/molecule/ansible-clickhouse/centos_7/inventory', '--skip-tags', 'molecule-notest,notest', '/home/08-ansible-05/ansible-clickhouse/molecule/resources/playbooks/converge.yml'] +WARNING An error occurred during the test sequence action: 'converge'. Cleaning up. +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to + /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > cleanup +WARNING Skipping, cleanup playbook not configured. 
+INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to
+ /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts
+INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars
+INFO Inventory /home/08-ansible-05/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars
+INFO Running centos_7 > destroy
+
+PLAY [Destroy] *****************************************************************
+
+TASK [Destroy molecule instance(s)] ********************************************
+changed: [localhost] => (item=centos_7)
+
+TASK [Wait for instance(s) deletion to complete] *******************************
+FAILED - RETRYING: [localhost]: Wait for instance(s) deletion to complete (300 retries left).
+changed: [localhost] => (item=centos_7)
+
+TASK [Delete docker networks(s)] ***********************************************
+
+PLAY RECAP *********************************************************************
+localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
+
+INFO Pruning extra files from scenario ephemeral directory
+┌──(kali㉿kali)-[~/08-ansible-05/clickhouse]
+└─$
+```
+
+The test revealed numerous violations of both YAML and Ansible rules.
+Most of the Ansible violations in this example concern **FQCN** (Fully Qualified Collection Name): a short module name is used instead of the fully qualified one (for example, `yum:` instead of `ansible.builtin.yum:`).
+The cherry on top is the **playbook** failure at the step that checks the start of `clickhouse-server.service`: **SystemD** does not work correctly inside a **Docker** container.
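+
+Fixing an `fqcn-builtins` violation amounts to replacing the short module name with its fully qualified form. A minimal sketch (the task and package names here are illustrative, not taken verbatim from the role):
+
+```yaml
+# Before: short module name, flagged by ansible-lint as fqcn-builtins
+- name: Ensure clickhouse package installed
+  yum:
+    name: clickhouse-server
+    state: present
+
+# After: fully qualified collection name, same behaviour
+- name: Ensure clickhouse package installed
+  ansible.builtin.yum:
+    name: clickhouse-server
+    state: present
+```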
+
+---
+
+Report on the modified **playbook**, using [docker systemctl replacement](https://github.com/gdraheim/docker-systemctl-replacement) by **Guido Draheim**.
+
+Environment used, on **Ubuntu 22**:
+
+```console
+root@ubuntu:~$ molecule --version
+molecule 4.0.1 using python 3.10
+    ansible:2.13.3
+    delegated:4.0.1 from molecule
+    docker:2.0.0 from molecule_docker requiring collections: community.docker>=3.0.0-a2
+root@ubuntu:~$
+```
+
+Modified `molecule/resources/playbooks/converge.yml`:
+
+```yaml
+---
+- name: Converge
+  hosts: all
+  pre_tasks:
+    - name: Download SystemD replacer
+      become: true
+      ansible.builtin.get_url:
+        url: https://raw.githubusercontent.com/gdraheim/docker-systemctl-replacement/master/files/docker/systemctl.py
+        dest: /systemctl.py
+        mode: "a+x"
+    - name: Replace systemctl
+      become: true
+      ansible.builtin.copy:
+        src: /systemctl.py
+        remote_src: true
+        dest: /usr/bin/systemctl
+        force: true
+  tasks:
+    - name: 'Apply Clickhouse Role'
+      include_role:
+        name: ansible-clickhouse
+```
+
+Molecule test:
+
+```console
+root@ubuntu:~/ansible-clickhouse$ molecule test -s centos_7
+INFO centos_7 scenario test matrix: dependency, lint, cleanup, destroy, syntax, create, prepare, converge, idempotence, side_effect, verify, cleanup, destroy
+INFO Performing prerun with role_name_check=0...
+INFO Set ANSIBLE_LIBRARY=/home/.cache/ansible-compat/b9a93c/modules:/home/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
+INFO Set ANSIBLE_COLLECTIONS_PATH=/home/.cache/ansible-compat/b9a93c/../home/.ansible:/usr/share/ansible
+INFO Set ANSIBLE_ROLES_PATH=/home/.cache/ansible-compat/b9a93c/roles:/home/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
+INFO Using /home/.cache/ansible-compat/b9a93c/roles/alexeysetevoi.clickhouse symlink to current repository in order to enable Ansible to find the role using its expected full name.
+INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > dependency +WARNING Skipping, missing the requirements file. +WARNING Skipping, missing the requirements file. +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > lint +WARNING Listing 61 violation(s) that are fatal +fqcn-builtins: Use FQCN for builtin actions. +handlers/main.yml:3 Task/Handler: Restart Clickhouse Service + +schema: {'name': 'EL', 'versions': [7, 8]} is not valid under any of the given schemas (schema[meta]) +meta/main.yml:1 Returned errors will not include exact line numbers, but they will mention +the schema name being used as a tag, like ``playbook-schema``, +``tasks-schema``. + +This rule is not skippable and stops further processing of the file. + +Schema bugs should be reported towards (https://github.com/ansible/schemas) project instead of ansible-lint. + +If incorrect schema was picked, you might want to either: + +* move the file to standard location, so its file is detected correctly. 
+* use ``kinds:`` option in linter config to help it pick correct file type. + + +fqcn-builtins: Use FQCN for builtin actions. +molecule/centos_7/converge.yml:5 Task/Handler: Include ansible-clickhouse + +fqcn-builtins: Use FQCN for builtin actions. +molecule/centos_7/verify.yml:8 Task/Handler: Example assertion + +fqcn-builtins: Use FQCN for builtin actions. +molecule/centos_8/converge.yml:5 Task/Handler: Include ansible-clickhouse + +fqcn-builtins: Use FQCN for builtin actions. +molecule/centos_8/verify.yml:8 Task/Handler: Example assertion + +schema: None is not of type 'object' (schema[inventory]) +molecule/resources/inventory/hosts.yml:1 Returned errors will not include exact line numbers, but they will mention +the schema name being used as a tag, like ``playbook-schema``, +``tasks-schema``. + +This rule is not skippable and stops further processing of the file. + +Schema bugs should be reported towards (https://github.com/ansible/schemas) project instead of ansible-lint. + +If incorrect schema was picked, you might want to either: + +* move the file to standard location, so its file is detected correctly. +* use ``kinds:`` option in linter config to help it pick correct file type. + + +risky-file-permissions: File permissions unset or incorrect. +molecule/resources/playbooks/converge.yml:11 Task/Handler: Replace systemctl + +fqcn-builtins: Use FQCN for builtin actions. +molecule/resources/playbooks/converge.yml:19 Task/Handler: Apply Clickhouse Role + +fqcn-builtins: Use FQCN for builtin actions. +molecule/ubuntu_focal/converge.yml:5 Task/Handler: Include ansible-clickhouse + +fqcn-builtins: Use FQCN for builtin actions. +molecule/ubuntu_focal/verify.yml:8 Task/Handler: Example assertion + +fqcn-builtins: Use FQCN for builtin actions. 
+tasks/configure/db.yml:2 Task/Handler: Set ClickHose Connection String + +jinja: Jinja2 spacing could be improved: clickhouse-client -h 127.0.0.1 --port {{ clickhouse_tcp_secure_port | default(clickhouse_tcp_port) }}{{' --secure' if clickhouse_tcp_secure_port is defined else '' }} -> clickhouse-client -h 127.0.0.1 --port {{ clickhouse_tcp_secure_port | default(clickhouse_tcp_port) }}{{ ' --secure' if clickhouse_tcp_secure_port is defined else '' }} (jinja[spacing]) +tasks/configure/db.yml:2 Jinja2 template rewrite recommendation: `clickhouse-client -h 127.0.0.1 --port {{ clickhouse_tcp_secure_port | default(clickhouse_tcp_port) }}{{ ' --secure' if clickhouse_tcp_secure_port is defined else '' }}`. + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/db.yml:5 Task/Handler: Gather list of existing databases + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/db.yml:11 Task/Handler: Config | Delete database config + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/db.yml:20 Task/Handler: Config | Create database config + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/dict.yml:2 Task/Handler: Config | Generate dictionary config + +jinja: Jinja2 spacing could be improved: clickhouse_dicts|length>0 -> clickhouse_dicts | length > 0 (jinja[spacing]) +tasks/configure/dict.yml:2 Jinja2 template rewrite recommendation: `clickhouse_dicts | length > 0`. + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:2 Task/Handler: Check clickhouse config, data and logs + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:17 Task/Handler: Config | Create config.d folder + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:26 Task/Handler: Config | Create users.d folder + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:35 Task/Handler: Config | Generate system config + +fqcn-builtins: Use FQCN for builtin actions. 
+tasks/configure/sys.yml:45 Task/Handler: Config | Generate users config + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:54 Task/Handler: Config | Generate remote_servers config + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:65 Task/Handler: Config | Generate macros config + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:76 Task/Handler: Config | Generate zookeeper servers config + +fqcn-builtins: Use FQCN for builtin actions. +tasks/configure/sys.yml:87 Task/Handler: Config | Fix interserver_http_port and intersever_https_port collision + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/apt.yml:5 Task/Handler: Install by APT | Apt-key add repo key + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/apt.yml:12 Task/Handler: Install by APT | Remove old repo + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/apt.yml:20 Task/Handler: Install by APT | Repo installation + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/apt.yml:27 Task/Handler: Install by APT | Package installation + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/apt.yml:36 Task/Handler: Install by APT | Package installation + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/apt.yml:45 Task/Handler: Hold specified version during APT upgrade | Package installation + +risky-file-permissions: File permissions unset or incorrect. +tasks/install/apt.yml:45 Task/Handler: Hold specified version during APT upgrade | Package installation + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/dnf.yml:5 Task/Handler: Install by YUM | Ensure clickhouse repo GPG key imported + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/dnf.yml:12 Task/Handler: Install by YUM | Ensure clickhouse repo installed + +fqcn-builtins: Use FQCN for builtin actions. 
+tasks/install/dnf.yml:24 Task/Handler: Install by YUM | Ensure clickhouse package installed (latest) + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/dnf.yml:32 Task/Handler: Install by YUM | Ensure clickhouse package installed (version {{ clickhouse_version }}) + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/yum.yml:5 Task/Handler: Install by YUM | Ensure clickhouse repo installed + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/yum.yml:16 Task/Handler: Install by YUM | Ensure clickhouse package installed (latest) + +fqcn-builtins: Use FQCN for builtin actions. +tasks/install/yum.yml:24 Task/Handler: Install by YUM | Ensure clickhouse package installed (version {{ clickhouse_version }}) + +fqcn-builtins: Use FQCN for builtin actions. +tasks/main.yml:3 Task/Handler: Include OS Family Specific Variables + +fqcn-builtins: Use FQCN for builtin actions. +tasks/main.yml:39 Task/Handler: Notify Handlers Now + +fqcn-builtins: Use FQCN for builtin actions. +tasks/main.yml:45 Task/Handler: Wait for Clickhouse Server to Become Ready + +jinja: Jinja2 spacing could be improved: not clickhouse_remove|bool -> not clickhouse_remove | bool (jinja[spacing]) +tasks/main.yml:45 Jinja2 template rewrite recommendation: `not clickhouse_remove | bool`. + +fqcn-builtins: Use FQCN for builtin actions. +tasks/params.yml:3 Task/Handler: Set clickhouse_service_enable + +fqcn-builtins: Use FQCN for builtin actions. +tasks/params.yml:7 Task/Handler: Set clickhouse_service_ensure + +fqcn-builtins: Use FQCN for builtin actions. +tasks/precheck.yml:1 Task/Handler: Requirements check | Checking sse4_2 support + +fqcn-builtins: Use FQCN for builtin actions. 
+tasks/precheck.yml:5 Task/Handler: Requirements check | Not supported distribution && release + +jinja: Jinja2 spacing could be improved: not clickhouse_supported|bool -> not clickhouse_supported | bool (jinja[spacing]) +tasks/precheck.yml:5 Jinja2 template rewrite recommendation: `not clickhouse_supported | bool`. + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove.yml:3 Task/Handler: Remove clickhouse config,data and logs + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/apt.yml:5 Task/Handler: Uninstall by APT | Package uninstallation + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/apt.yml:12 Task/Handler: Uninstall by APT | Repo uninstallation + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/apt.yml:18 Task/Handler: Uninstall by APT | Apt-key remove repo key + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/dnf.yml:5 Task/Handler: Uninstall by YUM | Ensure clickhouse package uninstalled + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/dnf.yml:12 Task/Handler: Uninstall by YUM | Ensure clickhouse repo uninstalled + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/dnf.yml:19 Task/Handler: Uninstall by YUM | Ensure clickhouse key uninstalled + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/yum.yml:5 Task/Handler: Uninstall by YUM | Ensure clickhouse package uninstalled + +fqcn-builtins: Use FQCN for builtin actions. +tasks/remove/yum.yml:12 Task/Handler: Uninstall by YUM | Ensure clickhouse repo uninstalled + +fqcn-builtins: Use FQCN for builtin actions. 
+tasks/service.yml:3 Task/Handler: Ensure {{ clickhouse_service }} is enabled: {{ clickhouse_service_enable }} and state: {{ clickhouse_service_ensure }} + +jinja: Jinja2 spacing could be improved: deb http://repo.yandex.ru/clickhouse/{{ansible_distribution_release}} stable main -> deb http://repo.yandex.ru/clickhouse/{{ ansible_distribution_release }} stable main (jinja[spacing]) +vars/debian.yml:4 Jinja2 template rewrite recommendation: `deb http://repo.yandex.ru/clickhouse/{{ ansible_distribution_release }} stable main`. + +You can skip specific rules or tags by adding them to your configuration file: +# .config/ansible-lint.yml +warn_list: # or 'skip_list' to silence them completely + - experimental # all rules tagged as experimental + - fqcn-builtins # Use FQCN for builtin actions. + +Finished with 52 failure(s), 9 warning(s) on 56 files. +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > cleanup +WARNING Skipping, cleanup playbook not configured. 
+INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > destroy +INFO Sanity checks: 'docker' + +PLAY [Destroy] ***************************************************************** + +TASK [Destroy molecule instance(s)] ******************************************** +changed: [localhost] => (item=centos_7) + +TASK [Wait for instance(s) deletion to complete] ******************************* +ok: [localhost] => (item=centos_7) + +TASK [Delete docker networks(s)] *********************************************** + +PLAY RECAP ********************************************************************* +localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 + +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > syntax + +playbook: /home/ansible-clickhouse/molecule/resources/playbooks/converge.yml +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory 
/home/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > create + +PLAY [Create] ****************************************************************** + +TASK [Log into a Docker registry] ********************************************** +skipping: [localhost] => (item=None) +skipping: [localhost] + +TASK [Check presence of custom Dockerfiles] ************************************ +ok: [localhost] => (item={'capabilities': ['SYS_ADMIN'], 'command': '/usr/sbin/init', 'dockerfile': '../resources/Dockerfile.j2', 'env': {'ANSIBLE_USER': 'ansible', 'DEPLOY_GROUP': 'deployer', 'SUDO_GROUP': 'wheel', 'container': 'docker'}, 'image': 'centos:7', 'name': 'centos_7', 'privileged': True, 'tmpfs': ['/run', '/tmp'], 'volumes': ['/sys/fs/cgroup:/sys/fs/cgroup']}) + +TASK [Create Dockerfiles from image names] ************************************* +changed: [localhost] => (item={'capabilities': ['SYS_ADMIN'], 'command': '/usr/sbin/init', 'dockerfile': '../resources/Dockerfile.j2', 'env': {'ANSIBLE_USER': 'ansible', 'DEPLOY_GROUP': 'deployer', 'SUDO_GROUP': 'wheel', 'container': 'docker'}, 'image': 'centos:7', 'name': 'centos_7', 'privileged': True, 'tmpfs': ['/run', '/tmp'], 'volumes': ['/sys/fs/cgroup:/sys/fs/cgroup']}) + +TASK [Discover local Docker images] ******************************************** +ok: [localhost] => (item={'diff': [], 'dest': '/home/.cache/molecule/ansible-clickhouse/centos_7/Dockerfile_centos_7', 'src': '/home/.ansible/tmp/ansible-tmp-1661954081.235305-32485-267253787192059/source', 'md5sum': 'e90d08cd34f49a5f8a41a07de1348618', 'checksum': '4b70768619482424811f2977aa277a5acf2b13a1', 'changed': True, 'uid': 1000, 'gid': 1000, 'owner': 'sa', 'group': 'sa', 'mode': 
'0600', 'state': 'file', 'size': 2199, 'invocation': {'module_args': {'src': '/home/.ansible/tmp/ansible-tmp-1661954081.235305-32485-267253787192059/source', 'dest': '/home/.cache/molecule/ansible-clickhouse/centos_7/Dockerfile_centos_7', 'mode': '0600', 'follow': False, '_original_basename': 'Dockerfile.j2', 'checksum': '4b70768619482424811f2977aa277a5acf2b13a1', 'backup': False, 'force': True, 'unsafe_writes': False, 'content': None, 'validate': None, 'directory_mode': None, 'remote_src': None, 'local_follow': None, 'owner': None, 'group': None, 'seuser': None, 'serole': None, 'selevel': None, 'setype': None, 'attributes': None}}, 'failed': False, 'item': {'capabilities': ['SYS_ADMIN'], 'command': '/usr/sbin/init', 'dockerfile': '../resources/Dockerfile.j2', 'env': {'ANSIBLE_USER': 'ansible', 'DEPLOY_GROUP': 'deployer', 'SUDO_GROUP': 'wheel', 'container': 'docker'}, 'image': 'centos:7', 'name': 'centos_7', 'privileged': True, 'tmpfs': ['/run', '/tmp'], 'volumes': ['/sys/fs/cgroup:/sys/fs/cgroup']}, 'ansible_loop_var': 'item', 'i': 0, 'ansible_index_var': 'i'}) + +TASK [Build an Ansible compatible image (new)] ********************************* +ok: [localhost] => (item=molecule_local/centos:7) + +TASK [Create docker network(s)] ************************************************ + +TASK [Determine the CMD directives] ******************************************** +ok: [localhost] => (item={'capabilities': ['SYS_ADMIN'], 'command': '/usr/sbin/init', 'dockerfile': '../resources/Dockerfile.j2', 'env': {'ANSIBLE_USER': 'ansible', 'DEPLOY_GROUP': 'deployer', 'SUDO_GROUP': 'wheel', 'container': 'docker'}, 'image': 'centos:7', 'name': 'centos_7', 'privileged': True, 'tmpfs': ['/run', '/tmp'], 'volumes': ['/sys/fs/cgroup:/sys/fs/cgroup']}) + +TASK [Create molecule instance(s)] ********************************************* +changed: [localhost] => (item=centos_7) + +TASK [Wait for instance(s) creation to complete] ******************************* +FAILED - RETRYING: [localhost]: 
Wait for instance(s) creation to complete (300 retries left). +changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': '128133508658.32615', 'results_file': '/home/.ansible_async/128133508658.32615', 'changed': True, 'item': {'capabilities': ['SYS_ADMIN'], 'command': '/usr/sbin/init', 'dockerfile': '../resources/Dockerfile.j2', 'env': {'ANSIBLE_USER': 'ansible', 'DEPLOY_GROUP': 'deployer', 'SUDO_GROUP': 'wheel', 'container': 'docker'}, 'image': 'centos:7', 'name': 'centos_7', 'privileged': True, 'tmpfs': ['/run', '/tmp'], 'volumes': ['/sys/fs/cgroup:/sys/fs/cgroup']}, 'ansible_loop_var': 'item'}) + +PLAY RECAP ********************************************************************* +localhost : ok=7 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 + +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > prepare +WARNING Skipping, prepare playbook not configured. 
+INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > converge + +PLAY [Converge] **************************************************************** + +TASK [Gathering Facts] ********************************************************* +ok: [centos_7] + +TASK [Download SystemD replacer] *********************************************** +changed: [centos_7] + +TASK [Replace systemctl] ******************************************************* +changed: [centos_7] + +TASK [Apply Clickhouse Role] *************************************************** + +TASK [ansible-clickhouse : Include OS Family Specific Variables] *************** +ok: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/ansible-clickhouse/tasks/precheck.yml for centos_7 + +TASK [ansible-clickhouse : Requirements check | Checking sse4_2 support] ******* +ok: [centos_7] + +TASK [ansible-clickhouse : Requirements check | Not supported distribution && release] *** +skipping: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/ansible-clickhouse/tasks/params.yml for centos_7 + +TASK [ansible-clickhouse : Set clickhouse_service_enable] ********************** +ok: [centos_7] + +TASK [ansible-clickhouse : Set clickhouse_service_ensure] ********************** +ok: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/ansible-clickhouse/tasks/install/yum.yml for 
centos_7 + +TASK [ansible-clickhouse : Install by YUM | Ensure clickhouse repo installed] *** +--- before: /etc/yum.repos.d/clickhouse.repo ++++ after: /etc/yum.repos.d/clickhouse.repo +@@ -0,0 +1,7 @@ ++[clickhouse] ++async = 1 ++baseurl = https://packages.clickhouse.com/rpm/stable/ ++enabled = 1 ++gpgcheck = 0 ++name = Clickhouse repo ++ + +changed: [centos_7] + +TASK [ansible-clickhouse : Install by YUM | Ensure clickhouse package installed (latest)] *** +changed: [centos_7] + +TASK [ansible-clickhouse : Install by YUM | Ensure clickhouse package installed (version latest)] *** +skipping: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/ansible-clickhouse/tasks/configure/sys.yml for centos_7 + +TASK [ansible-clickhouse : Check clickhouse config, data and logs] ************* +ok: [centos_7] => (item=/var/log/clickhouse-server) +--- before ++++ after +@@ -1,4 +1,4 @@ + { +- "mode": "0700", ++ "mode": "0770", + "path": "/etc/clickhouse-server" + } + +changed: [centos_7] => (item=/etc/clickhouse-server) +--- before ++++ after +@@ -1,7 +1,7 @@ + { +- "group": 0, +- "mode": "0755", +- "owner": 0, ++ "group": 996, ++ "mode": "0770", ++ "owner": 999, + "path": "/var/lib/clickhouse/tmp/", +- "state": "absent" ++ "state": "directory" + } + +changed: [centos_7] => (item=/var/lib/clickhouse/tmp/) +--- before ++++ after +@@ -1,4 +1,4 @@ + { +- "mode": "0700", ++ "mode": "0770", + "path": "/var/lib/clickhouse/" + } + +changed: [centos_7] => (item=/var/lib/clickhouse/) + +TASK [ansible-clickhouse : Config | Create config.d folder] ******************** +--- before ++++ after +@@ -1,4 +1,4 @@ + { +- "mode": "0500", ++ "mode": "0770", + "path": "/etc/clickhouse-server/config.d" + } + +changed: [centos_7] + +TASK [ansible-clickhouse : Config | Create users.d folder] ********************* +--- before ++++ after +@@ -1,4 +1,4 @@ + { +- "mode": "0500", ++ "mode": "0770", + "path": "/etc/clickhouse-server/users.d" + } + 
+changed: [centos_7] + +TASK [ansible-clickhouse : Config | Generate system config] ******************** +--- before ++++ after: /home/.ansible/tmp/ansible-local-327410lkn5fza/tmp355vng43/config.j2 +@@ -0,0 +1,382 @@ ++ ++ ++ ++ ++ ++ trace ++ /var/log/clickhouse-server/clickhouse-server.log ++ /var/log/clickhouse-server/clickhouse-server.err.log ++ 1000M ++ 10 ++ ++ ++ 8123 ++ ++ 9000 ++ ++ ++ ++ ++ ++ /etc/clickhouse-server/server.crt ++ /etc/clickhouse-server/server.key ++ ++ /etc/clickhouse-server/dhparam.pem ++ none ++ true ++ true ++ sslv2,sslv3 ++ true ++ ++ ++ ++ true ++ true ++ sslv2,sslv3 ++ true ++ ++ ++ ++ RejectCertificateHandler ++ ++ ++ ++ ++ ++ ++ ++ ++ 9009 ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ 127.0.0.1 ++ ++ 2048 ++ 3 ++ ++ ++ 100 ++ ++ ++ ++ ++ ++ 8589934592 ++ ++ ++ 5368709120 ++ ++ ++ ++ /var/lib/clickhouse/ ++ ++ ++ /var/lib/clickhouse/tmp/ ++ ++ ++ /var/lib/clickhouse/user_files/ ++ ++ ++ users.xml ++ ++ ++ default ++ ++ ++ ++ ++ ++ default ++ ++ ++ ++ ++ ++ ++ ++ ++ False ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ 3600 ++ ++ ++ ++ True ++ ++ ++ 3600 ++ ++ ++ 60 ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ system ++ query_log
++ ++ toYYYYMM(event_date) ++ ++ 7500 ++
++ ++ ++ ++ system ++ query_thread_log
++ toYYYYMM(event_date) ++ ++ 7500 ++
++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ *_dictionary.xml ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ /clickhouse/task_queue/ddl ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ click_cost ++ any ++ ++ 0 ++ 3600 ++ ++ ++ 86400 ++ 60 ++ ++ ++ ++ max ++ ++ 0 ++ 60 ++ ++ ++ 3600 ++ 300 ++ ++ ++ 86400 ++ 3600 ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ /var/lib/clickhouse//format_schemas/ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++
+ +changed: [centos_7] + +TASK [ansible-clickhouse : Config | Generate users config] ********************* +--- before ++++ after: /home/.ansible/tmp/ansible-local-327410lkn5fza/tmprm7rasbr/users.j2 +@@ -0,0 +1,106 @@ ++ ++ ++ ++ ++ ++ ++ ++ 10000000000 ++ 0 ++ random ++ 100 ++ ++ ++ 1 ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ::1 ++ 127.0.0.1 ++ ++ default ++ default ++ ++ ++ ++ ++ ++ ::1 ++ 127.0.0.1 ++ ++ readonly ++ default ++ ++ ++ ++ ++ f2ca1bb6c7e907d06dafe4687e579fce76b37e4e93b7605022da52e6ccc26fd2 ++ ++ ::1 ++ 127.0.0.1 ++ ++ default ++ default ++ ++ testu1 ++ ++ ++ ++ ++ testplpassword ++ ++ ::1 ++ 127.0.0.1 ++ ++ default ++ default ++ ++ testu2 ++ ++ ++ ++ ++ testplpassword ++ ++ 192.168.0.0/24 ++ 10.0.0.0/8 ++ ++ default ++ default ++ ++ testu1 ++ testu2 ++ testu3 ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ 3600 ++ 0 ++ 0 ++ 0 ++ 0 ++ 0 ++ ++ ++ ++ + +changed: [centos_7] + +TASK [ansible-clickhouse : Config | Generate remote_servers config] ************ +skipping: [centos_7] + +TASK [ansible-clickhouse : Config | Generate macros config] ******************** +skipping: [centos_7] + +TASK [ansible-clickhouse : Config | Generate zookeeper servers config] ********* +skipping: [centos_7] + +TASK [ansible-clickhouse : Config | Fix interserver_http_port and intersever_https_port collision] *** +skipping: [centos_7] + +TASK [ansible-clickhouse : Notify Handlers Now] ******************************** + +RUNNING HANDLER [ansible-clickhouse : Restart Clickhouse Service] ************** +ok: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/ansible-clickhouse/tasks/service.yml for centos_7 + +TASK [ansible-clickhouse : Ensure clickhouse-server.service is enabled: True and state: restarted] *** +changed: [centos_7] + +TASK [ansible-clickhouse : Wait for Clickhouse Server to Become Ready] ********* +ok: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: 
/home/ansible-clickhouse/tasks/configure/db.yml for centos_7 + +TASK [ansible-clickhouse : Set ClickHose Connection String] ******************** +ok: [centos_7] + +TASK [ansible-clickhouse : Gather list of existing databases] ****************** +ok: [centos_7] + +TASK [ansible-clickhouse : Config | Delete database config] ******************** +skipping: [centos_7] => (item={'name': 'testu1'}) +skipping: [centos_7] => (item={'name': 'testu2'}) +skipping: [centos_7] => (item={'name': 'testu3'}) +skipping: [centos_7] => (item={'name': 'testu4', 'state': 'absent'}) + +TASK [ansible-clickhouse : Config | Create database config] ******************** +changed: [centos_7] => (item={'name': 'testu1'}) +changed: [centos_7] => (item={'name': 'testu2'}) +changed: [centos_7] => (item={'name': 'testu3'}) +skipping: [centos_7] => (item={'name': 'testu4', 'state': 'absent'}) + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/ansible-clickhouse/tasks/configure/dict.yml for centos_7 + +TASK [ansible-clickhouse : Config | Generate dictionary config] **************** +--- before ++++ after: /home/.ansible/tmp/ansible-local-327410lkn5fza/tmp08paiyou/dicts.j2 +@@ -0,0 +1,63 @@ ++ ++ ++ ++ ++ test_dict ++ ++ ++ DSN=testdb ++ dict_source
++
++ ++ ++ 300 ++ 360 ++ ++ ++ ++ ++ ++ ++ testIntKey ++ ++ ++ testAttrName ++ UInt32 ++ 0 ++ ++ ++
++ ++ test_dict ++ ++ ++ DSN=testdb ++ dict_source
++
++ ++ ++ 300 ++ 360 ++ ++ ++ ++ ++ ++ ++ ++ testAttrComplexName ++ String ++ ++ ++ ++ testAttrName ++ String ++ ++ ++ ++
++
+ +changed: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +skipping: [centos_7] + +PLAY RECAP ********************************************************************* +centos_7 : ok=28 changed=12 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 + +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > idempotence + +PLAY [Converge] **************************************************************** + +TASK [Gathering Facts] ********************************************************* +ok: [centos_7] + +TASK [Download SystemD replacer] *********************************************** +ok: [centos_7] + +TASK [Replace systemctl] ******************************************************* +ok: [centos_7] + +TASK [Apply Clickhouse Role] *************************************************** + +TASK [ansible-clickhouse : Include OS Family Specific Variables] *************** +ok: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/ansible-clickhouse/tasks/precheck.yml for centos_7 + +TASK [ansible-clickhouse : Requirements check | Checking sse4_2 support] ******* +ok: [centos_7] + +TASK [ansible-clickhouse : Requirements check | Not supported distribution && release] *** +skipping: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/ansible-clickhouse/tasks/params.yml for centos_7 + +TASK [ansible-clickhouse : Set clickhouse_service_enable] 
********************** +ok: [centos_7] + +TASK [ansible-clickhouse : Set clickhouse_service_ensure] ********************** +ok: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/ansible-clickhouse/tasks/install/yum.yml for centos_7 + +TASK [ansible-clickhouse : Install by YUM | Ensure clickhouse repo installed] *** +ok: [centos_7] + +TASK [ansible-clickhouse : Install by YUM | Ensure clickhouse package installed (latest)] *** +ok: [centos_7] + +TASK [ansible-clickhouse : Install by YUM | Ensure clickhouse package installed (version latest)] *** +skipping: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/ansible-clickhouse/tasks/configure/sys.yml for centos_7 + +TASK [ansible-clickhouse : Check clickhouse config, data and logs] ************* +ok: [centos_7] => (item=/var/log/clickhouse-server) +ok: [centos_7] => (item=/etc/clickhouse-server) +ok: [centos_7] => (item=/var/lib/clickhouse/tmp/) +ok: [centos_7] => (item=/var/lib/clickhouse/) + +TASK [ansible-clickhouse : Config | Create config.d folder] ******************** +ok: [centos_7] + +TASK [ansible-clickhouse : Config | Create users.d folder] ********************* +ok: [centos_7] + +TASK [ansible-clickhouse : Config | Generate system config] ******************** +ok: [centos_7] + +TASK [ansible-clickhouse : Config | Generate users config] ********************* +ok: [centos_7] + +TASK [ansible-clickhouse : Config | Generate remote_servers config] ************ +skipping: [centos_7] + +TASK [ansible-clickhouse : Config | Generate macros config] ******************** +skipping: [centos_7] + +TASK [ansible-clickhouse : Config | Generate zookeeper servers config] ********* +skipping: [centos_7] + +TASK [ansible-clickhouse : Config | Fix interserver_http_port and intersever_https_port collision] *** +skipping: [centos_7] + +TASK [ansible-clickhouse : Notify Handlers Now] 
******************************** + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/ansible-clickhouse/tasks/service.yml for centos_7 + +TASK [ansible-clickhouse : Ensure clickhouse-server.service is enabled: True and state: started] *** +ok: [centos_7] + +TASK [ansible-clickhouse : Wait for Clickhouse Server to Become Ready] ********* +ok: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/ansible-clickhouse/tasks/configure/db.yml for centos_7 + +TASK [ansible-clickhouse : Set ClickHose Connection String] ******************** +ok: [centos_7] + +TASK [ansible-clickhouse : Gather list of existing databases] ****************** +ok: [centos_7] + +TASK [ansible-clickhouse : Config | Delete database config] ******************** +skipping: [centos_7] => (item={'name': 'testu1'}) +skipping: [centos_7] => (item={'name': 'testu2'}) +skipping: [centos_7] => (item={'name': 'testu3'}) +skipping: [centos_7] => (item={'name': 'testu4', 'state': 'absent'}) + +TASK [ansible-clickhouse : Config | Create database config] ******************** +skipping: [centos_7] => (item={'name': 'testu1'}) +skipping: [centos_7] => (item={'name': 'testu2'}) +skipping: [centos_7] => (item={'name': 'testu3'}) +skipping: [centos_7] => (item={'name': 'testu4', 'state': 'absent'}) + +TASK [ansible-clickhouse : include_tasks] ************************************** +included: /home/ansible-clickhouse/tasks/configure/dict.yml for centos_7 + +TASK [ansible-clickhouse : Config | Generate dictionary config] **************** +ok: [centos_7] + +TASK [ansible-clickhouse : include_tasks] ************************************** +skipping: [centos_7] + +PLAY RECAP ********************************************************************* +centos_7 : ok=26 changed=0 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 + +INFO Idempotence completed successfully. 
+INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > side_effect +WARNING Skipping, side effect playbook not configured. +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars +INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars +INFO Running centos_7 > verify +INFO Running Ansible Verifier + +PLAY [Verify] ****************************************************************** + +TASK [Example assertion] ******************************************************* +ok: [centos_7] => { + "changed": false, + "msg": "All assertions passed" +} + +PLAY RECAP ********************************************************************* +centos_7 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + +INFO Verifier completed successfully. 
+INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts
+INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars
+INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars
+INFO Running centos_7 > cleanup
+WARNING Skipping, cleanup playbook not configured.
+INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/hosts.yml linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/hosts
+INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/group_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/group_vars
+INFO Inventory /home/ansible-clickhouse/molecule/centos_7/../resources/inventory/host_vars/ linked to /home/.cache/molecule/ansible-clickhouse/centos_7/inventory/host_vars
+INFO Running centos_7 > destroy
+
+PLAY [Destroy] *****************************************************************
+
+TASK [Destroy molecule instance(s)] ********************************************
+changed: [localhost] => (item=centos_7)
+
+TASK [Wait for instance(s) deletion to complete] *******************************
+FAILED - RETRYING: [localhost]: Wait for instance(s) deletion to complete (300 retries left).
+changed: [localhost] => (item=centos_7)
+
+TASK [Delete docker networks(s)] ***********************************************
+
+PLAY RECAP *********************************************************************
+localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
+
+INFO Pruning extra files from scenario ephemeral directory
+root@ubuntu:~/ansible-clickhouse$
+```
+
+---
+
+
+### 2. 
Go to the **vector-role** directory and create the default test scenario with `molecule init scenario --driver-name docker`.
+
+> The default scenario is **default**
+```console
+root@ubuntu:~/vector-role$ molecule init scenario --driver-name docker
+INFO Initializing new scenario default...
+INFO Initialized scenario in /home/vector-role/molecule/default successfully.
+root@ubuntu:~/vector-role$
+```
+
+---
+
+### 3. Add several different distributions (centos:8, ubuntu:latest) as instances and test the role; fix any errors found.
+
+For running in **Docker**, ready-made images by [pycontribs](https://hub.docker.com/u/pycontribs) were used:
+ - [CentOS](https://hub.docker.com/r/pycontribs/centos/tags) (available tags: `7`, `8`) - `docker.io/pycontribs/centos`
+ - [Ubuntu](https://hub.docker.com/r/pycontribs/ubuntu/tags) (only the `latest` tag is available) - `docker.io/pycontribs/ubuntu`
+
+The resulting **molecule** configuration file:
+
+```yaml
+---
+dependency:
+  name: galaxy
+driver:
+  name: docker
+  options:
+    D: true
+    vv: true
+platforms:
+  - name: centos7
+    image: docker.io/pycontribs/centos:7
+    pre_build_image: true
+  - name: centos8
+    image: docker.io/pycontribs/centos:8
+    pre_build_image: true
+  - name: ubuntu
+    image: docker.io/pycontribs/ubuntu:latest
+    pre_build_image: true
+provisioner:
+  name: ansible
+  options:
+    D: true
+    vv: true
+  playbooks:
+    converge: ../resources/converge.yml
+    verify: ../resources/verify.yml
+verifier:
+  name: ansible
+...
+```
+
+---
+
+### 4. Add several asserts to the verify.yml file to check that vector-role works (the config is valid, the service starts successfully, etc). Re-run the role test and make sure it passes.
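+
+The general pattern for such checks: run a probe command, register its result, and assert on it with `ansible.builtin.assert`. A minimal sketch (the `vector --version` probe and the task names are illustrative, not part of the role):
+
+```yaml
+    # Run a harmless probe and capture its result without reporting a change
+    - name: Probe vector binary
+      ansible.builtin.command: vector --version
+      register: ver_res
+      changed_when: false
+    # Fail the verify stage if the probe did not exit cleanly
+    - name: Assert vector binary responds
+      ansible.builtin.assert:
+        that:
+          - ver_res.rc == 0
+```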
+
+The completed verification **playbook**:
+
+```yaml
+---
+- name: Verify vector installation
+  hosts: all
+  gather_facts: false
+  tasks:
+    - name: Get information about vector
+      ansible.builtin.command: systemctl show vector
+      register: srv_res
+      changed_when: false
+    - name: Assert vector service
+      ansible.builtin.assert:
+        that:
+          - srv_res.rc == 0
+          - "'ActiveState=active' in srv_res.stdout_lines"
+          - "'LoadState=loaded' in srv_res.stdout_lines"
+          - "'SubState=running' in srv_res.stdout_lines"
+    - name: Validate vector configuration
+      ansible.builtin.command: vector validate
+      register: vld_res
+      changed_when: false
+    - name: Assert vector healthcheck
+      ansible.builtin.assert:
+        that:
+          - vld_res.rc == 0
+          - "vld_res.stdout_lines | map('trim') | list | last == 'Validated'"
+...
+```
+
+The checks confirm the current status of the **vector** service: loaded, enabled (autostart), and running.
+Then the configuration file is validated and **Vector** runs its self-check.
+
+Ideally, service status should be checked with the dedicated `ansible.builtin.service_facts` module, for example:
+```yaml
+  - name: 'Gather Local Services'
+    ansible.builtin.service_facts:
+    become: true
+  - name: 'Assert Vector Service'
+    ansible.builtin.assert:
+      that:
+        - "service_name in ansible_facts.services"
+        - "'running' == ansible_facts.services[service_name].state"
+        - "'enabled' == ansible_facts.services[service_name].status"
+```
+> The `service_name` variable must contain the name of the service being checked.
+
+Unfortunately, the **service_facts** module cannot detect the **vector** service installed via the script.
+
+Role test log using Molecule:
+```console
+root@ubuntu:~/vector-role$ molecule test
+INFO default scenario test matrix: dependency, lint, cleanup, destroy, syntax, create, prepare, converge, idempotence, side_effect, verify, cleanup, destroy
+INFO Performing prerun with role_name_check=0... 
+INFO Set ANSIBLE_LIBRARY=/home/.cache/ansible-compat/f5bcd7/modules:/home/.ansible/plugins/modules:/usr/share/ansible/plugins/modules +INFO Set ANSIBLE_COLLECTIONS_PATH=/home/.cache/ansible-compat/f5bcd7/collections:/home/.ansible/collections:/usr/share/ansible/collections +INFO Set ANSIBLE_ROLES_PATH=/home/.cache/ansible-compat/f5bcd7/roles:/home/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles +INFO Using /home/.cache/ansible-compat/f5bcd7/roles/artem_shtepa.vector_role symlink to current repository in order to enable Ansible to find the role using its expected full name. +INFO Running default > dependency +WARNING Skipping, missing the requirements file. +WARNING Skipping, missing the requirements file. +INFO Running default > lint +INFO Lint is disabled. +INFO Running default > cleanup +WARNING Skipping, cleanup playbook not configured. +INFO Running default > destroy +INFO Sanity checks: 'docker' + +PLAY [Destroy] ***************************************************************** + +TASK [Destroy molecule instance(s)] ******************************************** +changed: [localhost] => (item=centos7) +changed: [localhost] => (item=centos8) +changed: [localhost] => (item=ubuntu) + +TASK [Wait for instance(s) deletion to complete] ******************************* +ok: [localhost] => (item=centos7) +ok: [localhost] => (item=centos8) +ok: [localhost] => (item=ubuntu) + +TASK [Delete docker networks(s)] *********************************************** + +PLAY RECAP ********************************************************************* +localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 + +INFO Running default > syntax + +playbook: /home/vector-role/molecule/resources/converge.yml +INFO Running default > create + +PLAY [Create] ****************************************************************** + +TASK [Log into a Docker registry] ********************************************** +skipping: [localhost] => (item=None) +skipping: 
[localhost] => (item=None) +skipping: [localhost] => (item=None) +skipping: [localhost] + +TASK [Check presence of custom Dockerfiles] ************************************ +ok: [localhost] => (item={'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}) +ok: [localhost] => (item={'image': 'docker.io/pycontribs/centos:8', 'name': 'centos8', 'pre_build_image': True}) +ok: [localhost] => (item={'image': 'docker.io/pycontribs/ubuntu:latest', 'name': 'ubuntu', 'pre_build_image': True}) + +TASK [Create Dockerfiles from image names] ************************************* +skipping: [localhost] => (item={'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}) +skipping: [localhost] => (item={'image': 'docker.io/pycontribs/centos:8', 'name': 'centos8', 'pre_build_image': True}) +skipping: [localhost] => (item={'image': 'docker.io/pycontribs/ubuntu:latest', 'name': 'ubuntu', 'pre_build_image': True}) + +TASK [Discover local Docker images] ******************************************** +ok: [localhost] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': {'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}, 'ansible_loop_var': 'item', 'i': 0, 'ansible_index_var': 'i'}) +ok: [localhost] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': {'image': 'docker.io/pycontribs/centos:8', 'name': 'centos8', 'pre_build_image': True}, 'ansible_loop_var': 'item', 'i': 1, 'ansible_index_var': 'i'}) +ok: [localhost] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': {'image': 'docker.io/pycontribs/ubuntu:latest', 'name': 'ubuntu', 'pre_build_image': True}, 'ansible_loop_var': 'item', 'i': 2, 'ansible_index_var': 'i'}) + +TASK [Build an Ansible compatible image (new)] ********************************* +skipping: [localhost] => 
(item=molecule_local/docker.io/pycontribs/centos:7) +skipping: [localhost] => (item=molecule_local/docker.io/pycontribs/centos:8) +skipping: [localhost] => (item=molecule_local/docker.io/pycontribs/ubuntu:latest) + +TASK [Create docker network(s)] ************************************************ + +TASK [Determine the CMD directives] ******************************************** +ok: [localhost] => (item={'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}) +ok: [localhost] => (item={'image': 'docker.io/pycontribs/centos:8', 'name': 'centos8', 'pre_build_image': True}) +ok: [localhost] => (item={'image': 'docker.io/pycontribs/ubuntu:latest', 'name': 'ubuntu', 'pre_build_image': True}) + +TASK [Create molecule instance(s)] ********************************************* +changed: [localhost] => (item=centos7) +changed: [localhost] => (item=centos8) +changed: [localhost] => (item=ubuntu) + +TASK [Wait for instance(s) creation to complete] ******************************* +changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': '694370910668.38264', 'results_file': '/home/.ansible_async/694370910668.38264', 'changed': True, 'item': {'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}, 'ansible_loop_var': 'item'}) +FAILED - RETRYING: [localhost]: Wait for instance(s) creation to complete (300 retries left). 
+changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': '505681089154.38290', 'results_file': '/home/.ansible_async/505681089154.38290', 'changed': True, 'item': {'image': 'docker.io/pycontribs/centos:8', 'name': 'centos8', 'pre_build_image': True}, 'ansible_loop_var': 'item'}) +changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': '681996805273.38323', 'results_file': '/home/.ansible_async/681996805273.38323', 'changed': True, 'item': {'image': 'docker.io/pycontribs/ubuntu:latest', 'name': 'ubuntu', 'pre_build_image': True}, 'ansible_loop_var': 'item'}) + +PLAY RECAP ********************************************************************* +localhost : ok=5 changed=2 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 + +INFO Running default > prepare +WARNING Skipping, prepare playbook not configured. +INFO Running default > converge + +PLAY [Converge] **************************************************************** + +TASK [Gathering Facts] ********************************************************* +ok: [ubuntu] +ok: [centos8] +ok: [centos7] + +TASK [Detect python3 version] ************************************************** +ok: [centos8] +ok: [ubuntu] +ok: [centos7] + +TASK [Set python version of script to 3] *************************************** +skipping: [centos7] +ok: [centos8] +ok: [ubuntu] + +TASK [Download SystemD replacer v3] ******************************************** +skipping: [centos7] +changed: [ubuntu] +changed: [centos8] + +TASK [Download SystemD replacer v2] ******************************************** +skipping: [centos8] +skipping: [ubuntu] +changed: [centos7] + +TASK [Create systemd directories] ********************************************** +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/run/systemd/system", +- "state": "absent" ++ "state": "directory" + } + +changed: [centos7] => (item=/run/systemd/system) +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": 
"/run/systemd/system", +- "state": "absent" ++ "state": "directory" + } + +changed: [centos8] => (item=/run/systemd/system) +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/run/systemd/system", +- "state": "absent" ++ "state": "directory" + } + +changed: [ubuntu] => (item=/run/systemd/system) +ok: [centos7] => (item=/usr/lib/systemd/system) +ok: [centos8] => (item=/usr/lib/systemd/system) +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/usr/lib/systemd/system", +- "state": "absent" ++ "state": "directory" + } + +changed: [ubuntu] => (item=/usr/lib/systemd/system) + +TASK [Replace systemctl] ******************************************************* +changed: [centos7] +changed: [ubuntu] +changed: [centos8] + +TASK [ReCollect system info] *************************************************** +ok: [centos8] +ok: [ubuntu] +ok: [centos7] + +TASK [Get Clickhouse IP from docker engine] ************************************ +ok: [centos7 -> localhost] +ok: [centos8 -> localhost] +ok: [ubuntu -> localhost] + +TASK [Set Clickhouse IP to facts] ********************************************** +ok: [centos7] +ok: [centos8] +ok: [ubuntu] + +TASK [Include vector-role] ***************************************************** + +TASK [vector-role : Download distrib] ****************************************** +changed: [ubuntu] +changed: [centos8] +changed: [centos7] + +TASK [vector-role : Create distrib directory] ********************************** +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/root/vector", +- "state": "absent" ++ "state": "directory" + } + +changed: [ubuntu] +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/root/vector", +- "state": "absent" ++ "state": "directory" + } + +changed: [centos7] +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/root/vector", +- "state": "absent" ++ "state": "directory" + } + +changed: [centos8] + +TASK [vector-role : Unpack vector distrib by unarchive] ************************ +changed: [centos8] +changed: 
[ubuntu] +changed: [centos7] + +TASK [vector-role : Install vector executable] ********************************* +changed: [centos8] +changed: [centos7] +changed: [ubuntu] + +TASK [vector-role : Create vector directories] ********************************* +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/var/lib/vector", +- "state": "absent" ++ "state": "directory" + } + +changed: [centos7] => (item=/var/lib/vector) +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/var/lib/vector", +- "state": "absent" ++ "state": "directory" + } + +changed: [centos8] => (item=/var/lib/vector) +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/var/lib/vector", +- "state": "absent" ++ "state": "directory" + } + +changed: [ubuntu] => (item=/var/lib/vector) +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/etc/vector", +- "state": "absent" ++ "state": "directory" + } + +changed: [centos7] => (item=/etc/vector) +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/etc/vector", +- "state": "absent" ++ "state": "directory" + } + +changed: [centos8] => (item=/etc/vector) +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/etc/vector", +- "state": "absent" ++ "state": "directory" + } + +changed: [ubuntu] => (item=/etc/vector) + +TASK [vector-role : Create test directory] ************************************* +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/root/test", +- "state": "absent" ++ "state": "directory" + } + +changed: [centos7] +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/root/test", +- "state": "absent" ++ "state": "directory" + } + +changed: [centos8] +--- before ++++ after +@@ -1,4 +1,4 @@ + { + "path": "/root/test", +- "state": "absent" ++ "state": "directory" + } + +changed: [ubuntu] + +TASK [vector-role : Install vector configuration] ****************************** +--- before ++++ after: /home/.ansible/tmp/ansible-local-38634vd4fbr1r/tmp2m12gnzi/vector.toml.j2 +@@ -0,0 +1,24 @@ ++# Set global options ++data_dir = 
"/var/lib/vector" ++ ++# Vector's API (disabled by default) ++# Enable and try it out with the `vector top` command ++[api] ++enabled = true ++address = "0.0.0.0:8686" ++ ++[sources.test_log] ++type = "file" ++ignore_older_secs = 600 ++include = [ "/root/test/*.log" ] ++read_from = "beginning" ++ ++[sinks.clickhouse] ++type = "clickhouse" ++inputs = [ "test_log" ] ++database = "logs" ++endpoint = "http://172.17.0.2:8123" ++table = "file_log" ++compression = "gzip" ++auth = { user = "user", password = "userlog", strategy = "basic" } ++skip_unknown_fields = true + +changed: [centos7] +--- before ++++ after: /home/.ansible/tmp/ansible-local-38634vd4fbr1r/tmpvgzhew_n/vector.toml.j2 +@@ -0,0 +1,24 @@ ++# Set global options ++data_dir = "/var/lib/vector" ++ ++# Vector's API (disabled by default) ++# Enable and try it out with the `vector top` command ++[api] ++enabled = true ++address = "0.0.0.0:8686" ++ ++[sources.test_log] ++type = "file" ++ignore_older_secs = 600 ++include = [ "/root/test/*.log" ] ++read_from = "beginning" ++ ++[sinks.clickhouse] ++type = "clickhouse" ++inputs = [ "test_log" ] ++database = "logs" ++endpoint = "http://172.17.0.2:8123" ++table = "file_log" ++compression = "gzip" ++auth = { user = "user", password = "userlog", strategy = "basic" } ++skip_unknown_fields = true + +changed: [ubuntu] +--- before ++++ after: /home/.ansible/tmp/ansible-local-38634vd4fbr1r/tmpdsjgvgmu/vector.toml.j2 +@@ -0,0 +1,24 @@ ++# Set global options ++data_dir = "/var/lib/vector" ++ ++# Vector's API (disabled by default) ++# Enable and try it out with the `vector top` command ++[api] ++enabled = true ++address = "0.0.0.0:8686" ++ ++[sources.test_log] ++type = "file" ++ignore_older_secs = 600 ++include = [ "/root/test/*.log" ] ++read_from = "beginning" ++ ++[sinks.clickhouse] ++type = "clickhouse" ++inputs = [ "test_log" ] ++database = "logs" ++endpoint = "http://172.17.0.2:8123" ++table = "file_log" ++compression = "gzip" ++auth = { user = "user", password = "userlog", 
strategy = "basic" } ++skip_unknown_fields = true + +changed: [centos8] + +TASK [vector-role : Install vector service file] ******************************* +--- before ++++ after: /home/.ansible/tmp/ansible-local-38634vd4fbr1r/tmp9t21ccx1/vector.service.j2 +@@ -0,0 +1,16 @@ ++[Unit] ++Description=Vector ++Documentation=https://vector.dev ++After=network-online.target ++Requires=network-online.target ++ ++[Service] ++ExecStart=/usr/bin/vector -c /etc/vector/vector.toml ++ExecReload=/usr/bin/vector -c /etc/vector/vector.toml validate ++ExecReload=/bin/kill -HUP $MAINPID ++Restart=no ++AmbientCapabilities=CAP_NET_BIND_SERVICE ++EnvironmentFile=-/etc/default/vector ++ ++[Install] ++WantedBy=multi-user.target + +changed: [centos7] +--- before ++++ after: /home/.ansible/tmp/ansible-local-38634vd4fbr1r/tmpx5rqhlvm/vector.service.j2 +@@ -0,0 +1,16 @@ ++[Unit] ++Description=Vector ++Documentation=https://vector.dev ++After=network-online.target ++Requires=network-online.target ++ ++[Service] ++ExecStart=/usr/bin/vector -c /etc/vector/vector.toml ++ExecReload=/usr/bin/vector -c /etc/vector/vector.toml validate ++ExecReload=/bin/kill -HUP $MAINPID ++Restart=no ++AmbientCapabilities=CAP_NET_BIND_SERVICE ++EnvironmentFile=-/etc/default/vector ++ ++[Install] ++WantedBy=multi-user.target + +changed: [centos8] +--- before ++++ after: /home/.ansible/tmp/ansible-local-38634vd4fbr1r/tmpyx13qio2/vector.service.j2 +@@ -0,0 +1,16 @@ ++[Unit] ++Description=Vector ++Documentation=https://vector.dev ++After=network-online.target ++Requires=network-online.target ++ ++[Service] ++ExecStart=/usr/bin/vector -c /etc/vector/vector.toml ++ExecReload=/usr/bin/vector -c /etc/vector/vector.toml validate ++ExecReload=/bin/kill -HUP $MAINPID ++Restart=no ++AmbientCapabilities=CAP_NET_BIND_SERVICE ++EnvironmentFile=-/etc/default/vector ++ ++[Install] ++WantedBy=multi-user.target + +changed: [ubuntu] + +TASK [vector-role : Enable vector service] ************************************* +changed: [ubuntu] 
+changed: [centos8] +changed: [centos7] + +RUNNING HANDLER [vector-role : Start vector service] *************************** +changed: [centos8] +changed: [ubuntu] +changed: [centos7] + +PLAY RECAP ********************************************************************* +centos7 : ok=18 changed=13 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 +centos8 : ok=19 changed=13 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 +ubuntu : ok=19 changed=13 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 + +INFO Running default > idempotence + +PLAY [Converge] **************************************************************** + +TASK [Gathering Facts] ********************************************************* +ok: [centos8] +ok: [ubuntu] +ok: [centos7] + +TASK [Detect python3 version] ************************************************** +ok: [ubuntu] +ok: [centos8] +ok: [centos7] + +TASK [Set python version of script to 3] *************************************** +skipping: [centos7] +ok: [centos8] +ok: [ubuntu] + +TASK [Download SystemD replacer v3] ******************************************** +skipping: [centos7] +ok: [ubuntu] +ok: [centos8] + +TASK [Download SystemD replacer v2] ******************************************** +skipping: [centos8] +skipping: [ubuntu] +ok: [centos7] + +TASK [Create systemd directories] ********************************************** +ok: [centos7] => (item=/run/systemd/system) +ok: [centos8] => (item=/run/systemd/system) +ok: [ubuntu] => (item=/run/systemd/system) +ok: [centos7] => (item=/usr/lib/systemd/system) +ok: [ubuntu] => (item=/usr/lib/systemd/system) +ok: [centos8] => (item=/usr/lib/systemd/system) + +TASK [Replace systemctl] ******************************************************* +ok: [centos7] +ok: [ubuntu] +ok: [centos8] + +TASK [ReCollect system info] *************************************************** +ok: [centos8] +ok: [ubuntu] +ok: [centos7] + +TASK [Get Clickhouse IP from docker engine] ************************************ 
+ok: [centos7 -> localhost] +ok: [centos8 -> localhost] +ok: [ubuntu -> localhost] + +TASK [Set Clickhouse IP to facts] ********************************************** +ok: [centos7] +ok: [centos8] +ok: [ubuntu] + +TASK [Include vector-role] ***************************************************** + +TASK [vector-role : Download distrib] ****************************************** +ok: [centos7] +ok: [ubuntu] +ok: [centos8] + +TASK [vector-role : Create distrib directory] ********************************** +ok: [centos7] +ok: [centos8] +ok: [ubuntu] + +TASK [vector-role : Unpack vector distrib by unarchive] ************************ +ok: [ubuntu] +ok: [centos8] +ok: [centos7] + +TASK [vector-role : Install vector executable] ********************************* +ok: [centos7] +ok: [ubuntu] +ok: [centos8] + +TASK [vector-role : Create vector directories] ********************************* +ok: [centos7] => (item=/var/lib/vector) +ok: [ubuntu] => (item=/var/lib/vector) +ok: [centos8] => (item=/var/lib/vector) +ok: [centos7] => (item=/etc/vector) +ok: [centos8] => (item=/etc/vector) +ok: [ubuntu] => (item=/etc/vector) + +TASK [vector-role : Create test directory] ************************************* +ok: [centos7] +ok: [centos8] +ok: [ubuntu] + +TASK [vector-role : Install vector configuration] ****************************** +ok: [centos7] +ok: [centos8] +ok: [ubuntu] + +TASK [vector-role : Install vector service file] ******************************* +ok: [centos7] +ok: [centos8] +ok: [ubuntu] + +TASK [vector-role : Enable vector service] ************************************* +ok: [ubuntu] +ok: [centos8] +ok: [centos7] + +PLAY RECAP ********************************************************************* +centos7 : ok=17 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 +centos8 : ok=18 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 +ubuntu : ok=18 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 + +INFO Idempotence completed 
successfully. +INFO Running default > side_effect +WARNING Skipping, side effect playbook not configured. +INFO Running default > verify +INFO Running Ansible Verifier + +PLAY [Verify vector installation] ********************************************** + +TASK [Get information about vector] ******************************************** +ok: [centos8] +ok: [ubuntu] +ok: [centos7] + +TASK [Assert vector service] *************************************************** +ok: [centos7] => { + "changed": false, + "msg": "All assertions passed" +} +ok: [centos8] => { + "changed": false, + "msg": "All assertions passed" +} +ok: [ubuntu] => { + "changed": false, + "msg": "All assertions passed" +} + +TASK [Validate vector configuration] ******************************************* +ok: [centos8] +ok: [ubuntu] +ok: [centos7] + +TASK [Assert vector healthcheck] *********************************************** +ok: [centos7] => { + "changed": false, + "msg": "All assertions passed" +} +ok: [centos8] => { + "changed": false, + "msg": "All assertions passed" +} +ok: [ubuntu] => { + "changed": false, + "msg": "All assertions passed" +} + +PLAY RECAP ********************************************************************* +centos7 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 +centos8 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 +ubuntu : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + +INFO Verifier completed successfully. +INFO Running default > cleanup +WARNING Skipping, cleanup playbook not configured. 
+INFO Running default > destroy
+
+PLAY [Destroy] *****************************************************************
+
+TASK [Destroy molecule instance(s)] ********************************************
+changed: [localhost] => (item=centos7)
+changed: [localhost] => (item=centos8)
+changed: [localhost] => (item=ubuntu)
+
+TASK [Wait for instance(s) deletion to complete] *******************************
+FAILED - RETRYING: [localhost]: Wait for instance(s) deletion to complete (300 retries left).
+changed: [localhost] => (item=centos7)
+changed: [localhost] => (item=centos8)
+changed: [localhost] => (item=ubuntu)
+
+TASK [Delete docker networks(s)] ***********************************************
+
+PLAY RECAP *********************************************************************
+localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
+
+INFO Pruning extra files from scenario ephemeral directory
+root@ubuntu:~/vector-role$
+```
+
+
+---
+
+### 5. Add a new tag to the commit with the working scenario, following semantic versioning.
+
+Testing the role requires a running **Clickhouse** instance, so one was deployed in a separate **Docker** container.
+
+The **Docker Compose** [manifest](clickhouse/docker-compose.yml):
+
+```yaml
+---
+version: "2.4"
+services:
+  clickhouse_vm:
+    image: pycontribs/centos:7
+    container_name: clickhouse-01
+    ports:
+      - "9000:9000"
+      - "8123:8123"
+    tty: true
+    network_mode: bridge
+...
+```
+
+After the container was created, the [Ansible-clickhouse](https://github.com/NamorNinayzuk/ansible-clickhouse) role was applied to it.
+
+---
+
+## Tox
+
+> 1. Add the files from the example [directory](./example) to the vector-role directory.
+> 1. Run `docker run --privileged=True -v <path_to_repo>:/opt/vector-role -w /opt/vector-role -it aragast/netology:latest /bin/bash`, where `path_to_repo` is the path to the root of the vector-role repository on your filesystem.
+> 1. Inside the container, run the `tox` command and examine the output.
+> 1. Create a lightweight scenario for `molecule` with the `molecule_podman` driver. Check that it runs.
+> 1. Set the correct command in `tox.ini` so that the lightweight scenario is launched.
+> 1. Run the `tox` command. Make sure everything completed successfully.
+> 1. Add a new tag to the commit with the working scenario, following semantic versioning.
+>
+> When you are done, the repository should contain two molecule scenarios and one tox.ini file.
+> The link to the repository is the answer to this homework.
+> Don't forget to include in your answer the tags of the Tox and Molecule solutions.
+
+The environment used:
+
+```console
+root@ubuntu:~/vector-role$ molecule --version
+molecule 4.0.1 using python 3.10
+    ansible:2.13.3
+    delegated:4.0.1 from molecule
+    docker:2.0.0 from molecule_docker requiring collections: community.docker>=3.0.0-a2
+    podman:2.0.2 from molecule_podman requiring collections: containers.podman>=1.7.0 ansible.posix>=1.3.0
+root@ubuntu:~/vector-role$
+```
+
+To avoid accumulating stopped **Docker** containers after each run, the `--rm` option was added to the launch command.
+
+First **Tox** run inside the container:
+
+```console
+root@ubuntu:~/vector-role$ docker run --privileged=True --rm -v ~/vector-role:/opt/vector-role -w /opt/vector-role -it aragast/netology:latest /bin/bash
+[root@f8b40ec440dc vector-role]# tox
+py37-ansible210 create: /opt/vector-role/.tox/py37-ansible210
+py37-ansible210 installdeps: -rtox-requirements.txt, ansible<3.0
+py37-ansible210 installed:
ansible==2.10.7,ansible-base==2.10.17,ansible-compat==1.0.0,ansible-lint==5.1.3,arrow==1.2.3,bcrypt==4.0.0,binaryornot==0.4.4,bracex==2.3.post1,cached-property==1.5.2,Cerberus==1.3.2,certifi==2022.6.15.1,cffi==1.15.1,chardet==5.0.0,charset-normalizer==2.1.1,click==8.1.3,click-help-colors==0.9.1,commonmark==0.9.1,cookiecutter==2.1.1,cryptography==38.0.1,distro==1.7.0,enrich==1.2.7,idna==3.3,importlib-metadata==4.12.0,Jinja2==3.1.2,jinja2-time==0.2.0,jmespath==1.0.1,lxml==4.9.1,MarkupSafe==2.1.1,molecule==3.4.0,molecule-podman==1.0.1,packaging==21.3,paramiko==2.11.0,pathspec==0.10.1,pluggy==0.13.1,pycparser==2.21,Pygments==2.13.0,PyNaCl==1.5.0,pyparsing==3.0.9,python-dateutil==2.8.2,python-slugify==6.1.2,PyYAML==5.4.1,requests==2.28.1,rich==12.5.1,ruamel.yaml==0.17.21,ruamel.yaml.clib==0.2.6,selinux==0.2.1,six==1.16.0,subprocess-tee==0.3.5,tenacity==8.0.1,text-unidecode==1.3,typing_extensions==4.3.0,urllib3==1.26.12,wcmatch==8.4,yamllint==1.26.3,zipp==3.8.1 +py37-ansible210 run-test-pre: PYTHONHASHSEED='1189971644' +py37-ansible210 run-test: commands[0] | molecule test -s compatibility --destroy always +CRITICAL 'molecule/compatibility/molecule.yml' glob failed. Exiting. 
+ERROR: InvocationError for command /opt/vector-role/.tox/py37-ansible210/bin/molecule test -s compatibility --destroy always (exited with code 1) +py37-ansible30 create: /opt/vector-role/.tox/py37-ansible30 +py37-ansible30 installdeps: -rtox-requirements.txt, ansible<3.1 +py37-ansible30 installed: ansible==3.0.0,ansible-base==2.10.17,ansible-compat==1.0.0,ansible-lint==5.1.3,arrow==1.2.3,bcrypt==4.0.0,binaryornot==0.4.4,bracex==2.3.post1,cached-property==1.5.2,Cerberus==1.3.2,certifi==2022.6.15.1,cffi==1.15.1,chardet==5.0.0,charset-normalizer==2.1.1,click==8.1.3,click-help-colors==0.9.1,commonmark==0.9.1,cookiecutter==2.1.1,cryptography==38.0.1,distro==1.7.0,enrich==1.2.7,idna==3.3,importlib-metadata==4.12.0,Jinja2==3.1.2,jinja2-time==0.2.0,jmespath==1.0.1,lxml==4.9.1,MarkupSafe==2.1.1,molecule==3.4.0,molecule-podman==1.0.1,packaging==21.3,paramiko==2.11.0,pathspec==0.10.1,pluggy==0.13.1,pycparser==2.21,Pygments==2.13.0,PyNaCl==1.5.0,pyparsing==3.0.9,python-dateutil==2.8.2,python-slugify==6.1.2,PyYAML==5.4.1,requests==2.28.1,rich==12.5.1,ruamel.yaml==0.17.21,ruamel.yaml.clib==0.2.6,selinux==0.2.1,six==1.16.0,subprocess-tee==0.3.5,tenacity==8.0.1,text-unidecode==1.3,typing_extensions==4.3.0,urllib3==1.26.12,wcmatch==8.4,yamllint==1.26.3,zipp==3.8.1 +py37-ansible30 run-test-pre: PYTHONHASHSEED='1189971644' +py37-ansible30 run-test: commands[0] | molecule test -s compatibility --destroy always +CRITICAL 'molecule/compatibility/molecule.yml' glob failed. Exiting. 
+ERROR: InvocationError for command /opt/vector-role/.tox/py37-ansible30/bin/molecule test -s compatibility --destroy always (exited with code 1) +py39-ansible210 create: /opt/vector-role/.tox/py39-ansible210 +py39-ansible210 installdeps: -rtox-requirements.txt, ansible<3.0 +py39-ansible210 installed: ansible==2.10.7,ansible-base==2.10.17,ansible-compat==2.2.0,ansible-lint==5.1.3,arrow==1.2.3,attrs==22.1.0,bcrypt==4.0.0,binaryornot==0.4.4,bracex==2.3.post1,Cerberus==1.3.2,certifi==2022.6.15.1,cffi==1.15.1,chardet==5.0.0,charset-normalizer==2.1.1,click==8.1.3,click-help-colors==0.9.1,commonmark==0.9.1,cookiecutter==2.1.1,cryptography==38.0.1,distro==1.7.0,enrich==1.2.7,idna==3.3,Jinja2==3.1.2,jinja2-time==0.2.0,jmespath==1.0.1,jsonschema==4.16.0,lxml==4.9.1,MarkupSafe==2.1.1,molecule==3.4.0,molecule-podman==1.0.1,packaging==21.3,paramiko==2.11.0,pathspec==0.10.1,pluggy==0.13.1,pycparser==2.21,Pygments==2.13.0,PyNaCl==1.5.0,pyparsing==3.0.9,pyrsistent==0.18.1,python-dateutil==2.8.2,python-slugify==6.1.2,PyYAML==5.4.1,requests==2.28.1,rich==12.5.1,ruamel.yaml==0.17.21,ruamel.yaml.clib==0.2.6,selinux==0.2.1,six==1.16.0,subprocess-tee==0.3.5,tenacity==8.0.1,text-unidecode==1.3,urllib3==1.26.12,wcmatch==8.4,yamllint==1.26.3 +py39-ansible210 run-test-pre: PYTHONHASHSEED='1189971644' +py39-ansible210 run-test: commands[0] | molecule test -s compatibility --destroy always +CRITICAL 'molecule/compatibility/molecule.yml' glob failed. Exiting. 
+ERROR: InvocationError for command /opt/vector-role/.tox/py39-ansible210/bin/molecule test -s compatibility --destroy always (exited with code 1) +py39-ansible30 create: /opt/vector-role/.tox/py39-ansible30 +py39-ansible30 installdeps: -rtox-requirements.txt, ansible<3.1 +py39-ansible30 installed: ansible==3.0.0,ansible-base==2.10.17,ansible-compat==2.2.0,ansible-lint==5.1.3,arrow==1.2.3,attrs==22.1.0,bcrypt==4.0.0,binaryornot==0.4.4,bracex==2.3.post1,Cerberus==1.3.2,certifi==2022.6.15.1,cffi==1.15.1,chardet==5.0.0,charset-normalizer==2.1.1,click==8.1.3,click-help-colors==0.9.1,commonmark==0.9.1,cookiecutter==2.1.1,cryptography==38.0.1,distro==1.7.0,enrich==1.2.7,idna==3.3,Jinja2==3.1.2,jinja2-time==0.2.0,jmespath==1.0.1,jsonschema==4.16.0,lxml==4.9.1,MarkupSafe==2.1.1,molecule==3.4.0,molecule-podman==1.0.1,packaging==21.3,paramiko==2.11.0,pathspec==0.10.1,pluggy==0.13.1,pycparser==2.21,Pygments==2.13.0,PyNaCl==1.5.0,pyparsing==3.0.9,pyrsistent==0.18.1,python-dateutil==2.8.2,python-slugify==6.1.2,PyYAML==5.4.1,requests==2.28.1,rich==12.5.1,ruamel.yaml==0.17.21,ruamel.yaml.clib==0.2.6,selinux==0.2.1,six==1.16.0,subprocess-tee==0.3.5,tenacity==8.0.1,text-unidecode==1.3,urllib3==1.26.12,wcmatch==8.4,yamllint==1.26.3 +py39-ansible30 run-test-pre: PYTHONHASHSEED='1189971644' +py39-ansible30 run-test: commands[0] | molecule test -s compatibility --destroy always +CRITICAL 'molecule/compatibility/molecule.yml' glob failed. Exiting. 
+ERROR: InvocationError for command /opt/vector-role/.tox/py39-ansible30/bin/molecule test -s compatibility --destroy always (exited with code 1)
+_______________________________________________________ summary ________________________________________________________
+ERROR: py37-ansible210: commands failed
+ERROR: py37-ansible30: commands failed
+ERROR: py39-ansible210: commands failed
+ERROR: py39-ansible30: commands failed
+[root@f8b40ec440dc vector-role]#
+```
+
+> Since the **Docker** container was started with the role directory mounted inside it (`-v ~/vector-role:/opt/vector-role`),
+> the **Tox** run created its cache of virtual environments outside the container (the `~/vector-role/.tox` directory),
+> so they can be reused even after the container is removed and recreated (which is why using the `--rm` option is entirely justified).
+
+The messages show that all **Tox** environments failed with the same error in the executed command, namely `molecule test -s compatibility --destroy always`, which makes sense: the role has no `compatibility` scenario.
+
+Initializing a new scenario with the **podman** driver:
+
+```console
+root@ubuntu:~/vector-role$ molecule init scenario podman --driver-name podman
+INFO Initializing new scenario podman...
+INFO Initialized scenario in /home/vector-role/molecule/podman successfully.
+root@ubuntu:~/vector-role$
+```
+
+Tox output:
+
+```console
+[root@a906a0f6163d vector-role]# tox
+py37-ansible210 installed: ansible==2.10.7,ansible-base==2.10.17,ansible-compat==1.0.0,ansible-lint==5.1.3,arrow==1.2.3,bcrypt==4.0.0,binaryornot==0.4.4,bracex==2.3.post1,cached-property==1.5.2,Cerberus==1.3.2,certifi==2022.6.15.1,cffi==1.15.1,chardet==5.0.0,charset-normalizer==2.1.1,click==8.1.3,click-help-colors==0.9.1,commonmark==0.9.1,cookiecutter==2.1.1,cryptography==38.0.1,distro==1.7.0,enrich==1.2.7,idna==3.3,importlib-metadata==4.12.0,Jinja2==3.1.2,jinja2-time==0.2.0,jmespath==1.0.1,lxml==4.9.1,MarkupSafe==2.1.1,molecule==3.4.0,molecule-podman==1.0.1,packaging==21.3,paramiko==2.11.0,pathspec==0.10.1,pluggy==0.13.1,pycparser==2.21,Pygments==2.13.0,PyNaCl==1.5.0,pyparsing==3.0.9,python-dateutil==2.8.2,python-slugify==6.1.2,PyYAML==5.4.1,requests==2.28.1,rich==12.5.1,ruamel.yaml==0.17.21,ruamel.yaml.clib==0.2.6,selinux==0.2.1,six==1.16.0,subprocess-tee==0.3.5,tenacity==8.0.1,text-unidecode==1.3,typing_extensions==4.3.0,urllib3==1.26.12,wcmatch==8.4,yamllint==1.26.3,zipp==3.8.1
+py37-ansible210 run-test-pre: PYTHONHASHSEED='2945318393'
+py37-ansible210 run-test: commands[0] | molecule test -s podman --destroy always
+INFO podman scenario test matrix: dependency, lint, cleanup, destroy, syntax, create, prepare, converge, idempotence, side_effect, verify, cleanup, destroy
+INFO Performing prerun...
+WARNING Failed to locate command: [Errno 2] No such file or directory: 'git': 'git'
+INFO Guessed /opt/vector-role as project root directory
+INFO Using /root/.cache/ansible-lint/b984a4/roles/artem_shtepa.vector_role symlink to current repository in order to enable Ansible to find the role using its expected full name.
+INFO Added ANSIBLE_ROLES_PATH=~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/root/.cache/ansible-lint/b984a4/roles
+INFO Running podman > dependency
+WARNING Skipping, missing the requirements file.
+WARNING Skipping, missing the requirements file. +INFO Running podman > lint +INFO Lint is disabled. +INFO Running podman > cleanup +WARNING Skipping, cleanup playbook not configured. +INFO Running podman > destroy +INFO Sanity checks: 'podman' + +PLAY [Destroy] ***************************************************************** + +TASK [Destroy molecule instance(s)] ******************************************** +changed: [localhost] => (item={'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}) + +TASK [Wait for instance(s) deletion to complete] ******************************* +changed: [localhost] => (item={'started': 1, 'finished': 0, 'ansible_job_id': '269176633897.3284', 'results_file': '/root/.ansible_async/269176633897.3284', 'changed': True, 'failed': False, 'item': {'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}, 'ansible_loop_var': 'item'}) + +PLAY RECAP ********************************************************************* +localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + +INFO Running podman > syntax + +playbook: /opt/vector-role/molecule/resources/converge.yml +INFO Running podman > create + +PLAY [Create] ****************************************************************** + +TASK [get podman executable path] ********************************************** +ok: [localhost] + +TASK [save path to executable as fact] ***************************************** +ok: [localhost] + +TASK [Log into a container registry] ******************************************* +skipping: [localhost] => (item="centos7 registry username: None specified") + +TASK [Check presence of custom Dockerfiles] ************************************ +ok: [localhost] => (item=Dockerfile: None specified) + +TASK [Create Dockerfiles from image names] ************************************* +skipping: [localhost] => (item="Dockerfile: None specified; Image: docker.io/pycontribs/centos:7") + +TASK 
[Discover local Podman images] ******************************************** +ok: [localhost] => (item=centos7) + +TASK [Build an Ansible compatible image] *************************************** +skipping: [localhost] => (item=docker.io/pycontribs/centos:7) + +TASK [Determine the CMD directives] ******************************************** +ok: [localhost] => (item="centos7 command: None specified") + +TASK [Remove possible pre-existing containers] ********************************* +changed: [localhost] + +TASK [Discover local podman networks] ****************************************** +skipping: [localhost] => (item=centos7: None specified) + +TASK [Create podman network dedicated to this scenario] ************************ +skipping: [localhost] + +TASK [Create molecule instance(s)] ********************************************* +changed: [localhost] => (item=centos7) + +TASK [Wait for instance(s) creation to complete] ******************************* +failed: [localhost] (item=centos7) => {"ansible_job_id": "864829238242.3454", "ansible_loop_var": "item", "attempts": 1, "changed": true, "cmd": ["/usr/bin/podman", "run", "-d", "--name", "centos7", "--hostname=centos7", "docker.io/pycontribs/centos:7", "bash", "-c", "while true; do sleep 10000; done"], "delta": "0:00:00.047901", "end": "2022-09-11 11:34:41.767148", "finished": 1, "item": {"ansible_job_id": "864829238242.3454", "ansible_loop_var": "item", "changed": true, "failed": false, "finished": 0, "item": {"image": "docker.io/pycontribs/centos:7", "name": "centos7", "pre_build_image": true}, "results_file": "/root/.ansible_async/864829238242.3454", "started": 1}, "msg": "non-zero return code", "rc": 125, "start": "2022-09-11 11:34:41.719247", "stderr": "Error: invalid config provided: cannot set hostname when running in the host UTS namespace: invalid configuration", "stderr_lines": ["Error: invalid config provided: cannot set hostname when running in the host UTS namespace: invalid configuration"], "stdout": 
"", "stdout_lines": []} + +PLAY RECAP ********************************************************************* +localhost : ok=7 changed=2 unreachable=0 failed=1 skipped=5 rescued=0 ignored=0 + +CRITICAL Ansible return code was 2, command was: ['ansible-playbook', '--inventory', '/root/.cache/molecule/vector-role/podman/inventory', '--skip-tags', 'molecule-notest,notest', '/opt/vector-role/.tox/py37-ansible210/lib/python3.7/site-packages/molecule_podman/playbooks/create.yml'] +WARNING An error occurred during the test sequence action: 'create'. Cleaning up. +INFO Running podman > cleanup +WARNING Skipping, cleanup playbook not configured. +INFO Running podman > destroy + +PLAY [Destroy] ***************************************************************** + +TASK [Destroy molecule instance(s)] ******************************************** +changed: [localhost] => (item={'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}) + +TASK [Wait for instance(s) deletion to complete] ******************************* +changed: [localhost] => (item={'started': 1, 'finished': 0, 'ansible_job_id': '824208785878.3511', 'results_file': '/root/.ansible_async/824208785878.3511', 'changed': True, 'failed': False, 'item': {'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}, 'ansible_loop_var': 'item'}) + +PLAY RECAP ********************************************************************* +localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + +INFO Pruning extra files from scenario ephemeral directory +ERROR: InvocationError for command /opt/vector-role/.tox/py37-ansible210/bin/molecule test -s podman --destroy always (exited with code 1) +py37-ansible30 installed: 
ansible==3.0.0,ansible-base==2.10.17,ansible-compat==1.0.0,ansible-lint==5.1.3,arrow==1.2.3,bcrypt==4.0.0,binaryornot==0.4.4,bracex==2.3.post1,cached-property==1.5.2,Cerberus==1.3.2,certifi==2022.6.15.1,cffi==1.15.1,chardet==5.0.0,charset-normalizer==2.1.1,click==8.1.3,click-help-colors==0.9.1,commonmark==0.9.1,cookiecutter==2.1.1,cryptography==38.0.1,distro==1.7.0,enrich==1.2.7,idna==3.3,importlib-metadata==4.12.0,Jinja2==3.1.2,jinja2-time==0.2.0,jmespath==1.0.1,lxml==4.9.1,MarkupSafe==2.1.1,molecule==3.4.0,molecule-podman==1.0.1,packaging==21.3,paramiko==2.11.0,pathspec==0.10.1,pluggy==0.13.1,pycparser==2.21,Pygments==2.13.0,PyNaCl==1.5.0,pyparsing==3.0.9,python-dateutil==2.8.2,python-slugify==6.1.2,PyYAML==5.4.1,requests==2.28.1,rich==12.5.1,ruamel.yaml==0.17.21,ruamel.yaml.clib==0.2.6,selinux==0.2.1,six==1.16.0,subprocess-tee==0.3.5,tenacity==8.0.1,text-unidecode==1.3,typing_extensions==4.3.0,urllib3==1.26.12,wcmatch==8.4,yamllint==1.26.3,zipp==3.8.1 +py37-ansible30 run-test-pre: PYTHONHASHSEED='2945318393' +py37-ansible30 run-test: commands[0] | molecule test -s podman --destroy always +INFO podman scenario test matrix: dependency, lint, cleanup, destroy, syntax, create, prepare, converge, idempotence, side_effect, verify, cleanup, destroy +INFO Performing prerun... +WARNING Failed to locate command: [Errno 2] No such file or directory: 'git': 'git' +INFO Guessed /opt/vector-role as project root directory +INFO Using /root/.cache/ansible-lint/b984a4/roles/artem_shtepa.vector_role symlink to current repository in order to enable Ansible to find the role using its expected full name. +INFO Added ANSIBLE_ROLES_PATH=~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/root/.cache/ansible-lint/b984a4/roles +INFO Running podman > dependency +WARNING Skipping, missing the requirements file. +WARNING Skipping, missing the requirements file. +INFO Running podman > lint +INFO Lint is disabled. 
+INFO Running podman > cleanup +WARNING Skipping, cleanup playbook not configured. +INFO Running podman > destroy +INFO Sanity checks: 'podman' + +PLAY [Destroy] ***************************************************************** + +TASK [Destroy molecule instance(s)] ******************************************** +changed: [localhost] => (item={'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}) + +TASK [Wait for instance(s) deletion to complete] ******************************* +changed: [localhost] => (item={'started': 1, 'finished': 0, 'ansible_job_id': '266779808633.3605', 'results_file': '/root/.ansible_async/266779808633.3605', 'changed': True, 'failed': False, 'item': {'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}, 'ansible_loop_var': 'item'}) + +PLAY RECAP ********************************************************************* +localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + +INFO Running podman > syntax + +playbook: /opt/vector-role/molecule/resources/converge.yml +INFO Running podman > create + +PLAY [Create] ****************************************************************** + +TASK [get podman executable path] ********************************************** +ok: [localhost] + +TASK [save path to executable as fact] ***************************************** +ok: [localhost] + +TASK [Log into a container registry] ******************************************* +skipping: [localhost] => (item="centos7 registry username: None specified") + +TASK [Check presence of custom Dockerfiles] ************************************ +ok: [localhost] => (item=Dockerfile: None specified) + +TASK [Create Dockerfiles from image names] ************************************* +skipping: [localhost] => (item="Dockerfile: None specified; Image: docker.io/pycontribs/centos:7") + +TASK [Discover local Podman images] ******************************************** +ok: [localhost] => 
(item=centos7) + +TASK [Build an Ansible compatible image] *************************************** +skipping: [localhost] => (item=docker.io/pycontribs/centos:7) + +TASK [Determine the CMD directives] ******************************************** +ok: [localhost] => (item="centos7 command: None specified") + +TASK [Remove possible pre-existing containers] ********************************* +changed: [localhost] + +TASK [Discover local podman networks] ****************************************** +skipping: [localhost] => (item=centos7: None specified) + +TASK [Create podman network dedicated to this scenario] ************************ +skipping: [localhost] + +TASK [Create molecule instance(s)] ********************************************* +changed: [localhost] => (item=centos7) + +TASK [Wait for instance(s) creation to complete] ******************************* +failed: [localhost] (item=centos7) => {"ansible_job_id": "386413707022.3774", "ansible_loop_var": "item", "attempts": 1, "changed": true, "cmd": ["/usr/bin/podman", "run", "-d", "--name", "centos7", "--hostname=centos7", "docker.io/pycontribs/centos:7", "bash", "-c", "while true; do sleep 10000; done"], "delta": "0:00:00.043561", "end": "2022-09-11 11:34:55.340619", "finished": 1, "item": {"ansible_job_id": "386413707022.3774", "ansible_loop_var": "item", "changed": true, "failed": false, "finished": 0, "item": {"image": "docker.io/pycontribs/centos:7", "name": "centos7", "pre_build_image": true}, "results_file": "/root/.ansible_async/386413707022.3774", "started": 1}, "msg": "non-zero return code", "rc": 125, "start": "2022-09-11 11:34:55.297058", "stderr": "Error: invalid config provided: cannot set hostname when running in the host UTS namespace: invalid configuration", "stderr_lines": ["Error: invalid config provided: cannot set hostname when running in the host UTS namespace: invalid configuration"], "stdout": "", "stdout_lines": []} + +PLAY RECAP 
********************************************************************* +localhost : ok=7 changed=2 unreachable=0 failed=1 skipped=5 rescued=0 ignored=0 + +CRITICAL Ansible return code was 2, command was: ['ansible-playbook', '--inventory', '/root/.cache/molecule/vector-role/podman/inventory', '--skip-tags', 'molecule-notest,notest', '/opt/vector-role/.tox/py37-ansible30/lib/python3.7/site-packages/molecule_podman/playbooks/create.yml'] +WARNING An error occurred during the test sequence action: 'create'. Cleaning up. +INFO Running podman > cleanup +WARNING Skipping, cleanup playbook not configured. +INFO Running podman > destroy + +PLAY [Destroy] ***************************************************************** + +TASK [Destroy molecule instance(s)] ******************************************** +changed: [localhost] => (item={'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}) + +TASK [Wait for instance(s) deletion to complete] ******************************* +changed: [localhost] => (item={'started': 1, 'finished': 0, 'ansible_job_id': '931728167635.3832', 'results_file': '/root/.ansible_async/931728167635.3832', 'changed': True, 'failed': False, 'item': {'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}, 'ansible_loop_var': 'item'}) + +PLAY RECAP ********************************************************************* +localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + +INFO Pruning extra files from scenario ephemeral directory +ERROR: InvocationError for command /opt/vector-role/.tox/py37-ansible30/bin/molecule test -s podman --destroy always (exited with code 1) +py39-ansible210 installed: 
ansible==2.10.7,ansible-base==2.10.17,ansible-compat==2.2.0,ansible-lint==5.1.3,arrow==1.2.3,attrs==22.1.0,bcrypt==4.0.0,binaryornot==0.4.4,bracex==2.3.post1,Cerberus==1.3.2,certifi==2022.6.15.1,cffi==1.15.1,chardet==5.0.0,charset-normalizer==2.1.1,click==8.1.3,click-help-colors==0.9.1,commonmark==0.9.1,cookiecutter==2.1.1,cryptography==38.0.1,distro==1.7.0,enrich==1.2.7,idna==3.3,Jinja2==3.1.2,jinja2-time==0.2.0,jmespath==1.0.1,jsonschema==4.16.0,lxml==4.9.1,MarkupSafe==2.1.1,molecule==3.4.0,molecule-podman==1.0.1,packaging==21.3,paramiko==2.11.0,pathspec==0.10.1,pluggy==0.13.1,pycparser==2.21,Pygments==2.13.0,PyNaCl==1.5.0,pyparsing==3.0.9,pyrsistent==0.18.1,python-dateutil==2.8.2,python-slugify==6.1.2,PyYAML==5.4.1,requests==2.28.1,rich==12.5.1,ruamel.yaml==0.17.21,ruamel.yaml.clib==0.2.6,selinux==0.2.1,six==1.16.0,subprocess-tee==0.3.5,tenacity==8.0.1,text-unidecode==1.3,urllib3==1.26.12,wcmatch==8.4,yamllint==1.26.3 +py39-ansible210 run-test-pre: PYTHONHASHSEED='2945318393' +py39-ansible210 run-test: commands[0] | molecule test -s podman --destroy always +INFO podman scenario test matrix: dependency, lint, cleanup, destroy, syntax, create, prepare, converge, idempotence, side_effect, verify, cleanup, destroy +INFO Performing prerun... +WARNING Failed to locate command: [Errno 2] No such file or directory: 'git' +INFO Guessed /opt/vector-role as project root directory +INFO Using /root/.cache/ansible-lint/b984a4/roles/artem_shtepa.vector_role symlink to current repository in order to enable Ansible to find the role using its expected full name. +INFO Added ANSIBLE_ROLES_PATH=~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/root/.cache/ansible-lint/b984a4/roles +INFO Running podman > dependency +WARNING Skipping, missing the requirements file. +WARNING Skipping, missing the requirements file. +INFO Running podman > lint +INFO Lint is disabled. +INFO Running podman > cleanup +WARNING Skipping, cleanup playbook not configured. 
+INFO Running podman > destroy +INFO Sanity checks: 'podman' + +PLAY [Destroy] ***************************************************************** + +TASK [Destroy molecule instance(s)] ******************************************** +changed: [localhost] => (item={'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}) + +TASK [Wait for instance(s) deletion to complete] ******************************* +changed: [localhost] => (item={'started': 1, 'finished': 0, 'ansible_job_id': '99216622436.3900', 'results_file': '/root/.ansible_async/99216622436.3900', 'changed': True, 'failed': False, 'item': {'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}, 'ansible_loop_var': 'item'}) + +PLAY RECAP ********************************************************************* +localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + +INFO Running podman > syntax + +playbook: /opt/vector-role/molecule/resources/converge.yml +INFO Running podman > create + +PLAY [Create] ****************************************************************** + +TASK [get podman executable path] ********************************************** +ok: [localhost] + +TASK [save path to executable as fact] ***************************************** +ok: [localhost] + +TASK [Log into a container registry] ******************************************* +skipping: [localhost] => (item="centos7 registry username: None specified") + +TASK [Check presence of custom Dockerfiles] ************************************ +ok: [localhost] => (item=Dockerfile: None specified) + +TASK [Create Dockerfiles from image names] ************************************* +skipping: [localhost] => (item="Dockerfile: None specified; Image: docker.io/pycontribs/centos:7") + +TASK [Discover local Podman images] ******************************************** +ok: [localhost] => (item=centos7) + +TASK [Build an Ansible compatible image] 
*************************************** +skipping: [localhost] => (item=docker.io/pycontribs/centos:7) + +TASK [Determine the CMD directives] ******************************************** +ok: [localhost] => (item="centos7 command: None specified") + +TASK [Remove possible pre-existing containers] ********************************* +changed: [localhost] + +TASK [Discover local podman networks] ****************************************** +skipping: [localhost] => (item=centos7: None specified) + +TASK [Create podman network dedicated to this scenario] ************************ +skipping: [localhost] + +TASK [Create molecule instance(s)] ********************************************* +changed: [localhost] => (item=centos7) + +TASK [Wait for instance(s) creation to complete] ******************************* +failed: [localhost] (item=centos7) => {"ansible_job_id": "453345608163.4051", "ansible_loop_var": "item", "attempts": 1, "changed": true, "cmd": ["/usr/bin/podman", "run", "-d", "--name", "centos7", "--hostname=centos7", "docker.io/pycontribs/centos:7", "bash", "-c", "while true; do sleep 10000; done"], "delta": "0:00:00.044988", "end": "2022-09-11 11:35:08.693879", "finished": 1, "item": {"ansible_job_id": "453345608163.4051", "ansible_loop_var": "item", "changed": true, "failed": false, "finished": 0, "item": {"image": "docker.io/pycontribs/centos:7", "name": "centos7", "pre_build_image": true}, "results_file": "/root/.ansible_async/453345608163.4051", "started": 1}, "msg": "non-zero return code", "rc": 125, "start": "2022-09-11 11:35:08.648891", "stderr": "Error: invalid config provided: cannot set hostname when running in the host UTS namespace: invalid configuration", "stderr_lines": ["Error: invalid config provided: cannot set hostname when running in the host UTS namespace: invalid configuration"], "stdout": "", "stdout_lines": []} + +PLAY RECAP ********************************************************************* +localhost : ok=7 changed=2 unreachable=0 
failed=1 skipped=5 rescued=0 ignored=0 + +CRITICAL Ansible return code was 2, command was: ['ansible-playbook', '--inventory', '/root/.cache/molecule/vector-role/podman/inventory', '--skip-tags', 'molecule-notest,notest', '/opt/vector-role/.tox/py39-ansible210/lib/python3.9/site-packages/molecule_podman/playbooks/create.yml'] +WARNING An error occurred during the test sequence action: 'create'. Cleaning up. +INFO Running podman > cleanup +WARNING Skipping, cleanup playbook not configured. +INFO Running podman > destroy + +PLAY [Destroy] ***************************************************************** + +TASK [Destroy molecule instance(s)] ******************************************** +changed: [localhost] => (item={'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}) + +TASK [Wait for instance(s) deletion to complete] ******************************* +changed: [localhost] => (item={'started': 1, 'finished': 0, 'ansible_job_id': '173151728599.4100', 'results_file': '/root/.ansible_async/173151728599.4100', 'changed': True, 'failed': False, 'item': {'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}, 'ansible_loop_var': 'item'}) + +PLAY RECAP ********************************************************************* +localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + +INFO Pruning extra files from scenario ephemeral directory +ERROR: InvocationError for command /opt/vector-role/.tox/py39-ansible210/bin/molecule test -s podman --destroy always (exited with code 1) +py39-ansible30 installed: 
ansible==3.0.0,ansible-base==2.10.17,ansible-compat==2.2.0,ansible-lint==5.1.3,arrow==1.2.3,attrs==22.1.0,bcrypt==4.0.0,binaryornot==0.4.4,bracex==2.3.post1,Cerberus==1.3.2,certifi==2022.6.15.1,cffi==1.15.1,chardet==5.0.0,charset-normalizer==2.1.1,click==8.1.3,click-help-colors==0.9.1,commonmark==0.9.1,cookiecutter==2.1.1,cryptography==38.0.1,distro==1.7.0,enrich==1.2.7,idna==3.3,Jinja2==3.1.2,jinja2-time==0.2.0,jmespath==1.0.1,jsonschema==4.16.0,lxml==4.9.1,MarkupSafe==2.1.1,molecule==3.4.0,molecule-podman==1.0.1,packaging==21.3,paramiko==2.11.0,pathspec==0.10.1,pluggy==0.13.1,pycparser==2.21,Pygments==2.13.0,PyNaCl==1.5.0,pyparsing==3.0.9,pyrsistent==0.18.1,python-dateutil==2.8.2,python-slugify==6.1.2,PyYAML==5.4.1,requests==2.28.1,rich==12.5.1,ruamel.yaml==0.17.21,ruamel.yaml.clib==0.2.6,selinux==0.2.1,six==1.16.0,subprocess-tee==0.3.5,tenacity==8.0.1,text-unidecode==1.3,urllib3==1.26.12,wcmatch==8.4,yamllint==1.26.3 +py39-ansible30 run-test-pre: PYTHONHASHSEED='2945318393' +py39-ansible30 run-test: commands[0] | molecule test -s podman --destroy always +INFO podman scenario test matrix: dependency, lint, cleanup, destroy, syntax, create, prepare, converge, idempotence, side_effect, verify, cleanup, destroy +INFO Performing prerun... +WARNING Failed to locate command: [Errno 2] No such file or directory: 'git' +INFO Guessed /opt/vector-role as project root directory +INFO Using /root/.cache/ansible-lint/b984a4/roles/artem_shtepa.vector_role symlink to current repository in order to enable Ansible to find the role using its expected full name. +INFO Added ANSIBLE_ROLES_PATH=~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/root/.cache/ansible-lint/b984a4/roles +INFO Running podman > dependency +WARNING Skipping, missing the requirements file. +WARNING Skipping, missing the requirements file. +INFO Running podman > lint +INFO Lint is disabled. +INFO Running podman > cleanup +WARNING Skipping, cleanup playbook not configured. 
+INFO Running podman > destroy +INFO Sanity checks: 'podman' + +PLAY [Destroy] ***************************************************************** + +TASK [Destroy molecule instance(s)] ******************************************** +changed: [localhost] => (item={'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}) + +TASK [Wait for instance(s) deletion to complete] ******************************* +changed: [localhost] => (item={'started': 1, 'finished': 0, 'ansible_job_id': '331091900457.4161', 'results_file': '/root/.ansible_async/331091900457.4161', 'changed': True, 'failed': False, 'item': {'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}, 'ansible_loop_var': 'item'}) + +PLAY RECAP ********************************************************************* +localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + +INFO Running podman > syntax + +playbook: /opt/vector-role/molecule/resources/converge.yml +INFO Running podman > create + +PLAY [Create] ****************************************************************** + +TASK [get podman executable path] ********************************************** +ok: [localhost] + +TASK [save path to executable as fact] ***************************************** +ok: [localhost] + +TASK [Log into a container registry] ******************************************* +skipping: [localhost] => (item="centos7 registry username: None specified") + +TASK [Check presence of custom Dockerfiles] ************************************ +ok: [localhost] => (item=Dockerfile: None specified) + +TASK [Create Dockerfiles from image names] ************************************* +skipping: [localhost] => (item="Dockerfile: None specified; Image: docker.io/pycontribs/centos:7") + +TASK [Discover local Podman images] ******************************************** +ok: [localhost] => (item=centos7) + +TASK [Build an Ansible compatible image] 
*************************************** +skipping: [localhost] => (item=docker.io/pycontribs/centos:7) + +TASK [Determine the CMD directives] ******************************************** +ok: [localhost] => (item="centos7 command: None specified") + +TASK [Remove possible pre-existing containers] ********************************* +changed: [localhost] + +TASK [Discover local podman networks] ****************************************** +skipping: [localhost] => (item=centos7: None specified) + +TASK [Create podman network dedicated to this scenario] ************************ +skipping: [localhost] + +TASK [Create molecule instance(s)] ********************************************* +changed: [localhost] => (item=centos7) + +TASK [Wait for instance(s) creation to complete] ******************************* +failed: [localhost] (item=centos7) => {"ansible_job_id": "655377812848.4313", "ansible_loop_var": "item", "attempts": 1, "changed": true, "cmd": ["/usr/bin/podman", "run", "-d", "--name", "centos7", "--hostname=centos7", "docker.io/pycontribs/centos:7", "bash", "-c", "while true; do sleep 10000; done"], "delta": "0:00:00.062807", "end": "2022-09-11 11:35:22.013978", "finished": 1, "item": {"ansible_job_id": "655377812848.4313", "ansible_loop_var": "item", "changed": true, "failed": false, "finished": 0, "item": {"image": "docker.io/pycontribs/centos:7", "name": "centos7", "pre_build_image": true}, "results_file": "/root/.ansible_async/655377812848.4313", "started": 1}, "msg": "non-zero return code", "rc": 125, "start": "2022-09-11 11:35:21.951171", "stderr": "Error: invalid config provided: cannot set hostname when running in the host UTS namespace: invalid configuration", "stderr_lines": ["Error: invalid config provided: cannot set hostname when running in the host UTS namespace: invalid configuration"], "stdout": "", "stdout_lines": []} + +PLAY RECAP ********************************************************************* +localhost : ok=7 changed=2 unreachable=0 
failed=1 skipped=5 rescued=0 ignored=0
+
+CRITICAL Ansible return code was 2, command was: ['ansible-playbook', '--inventory', '/root/.cache/molecule/vector-role/podman/inventory', '--skip-tags', 'molecule-notest,notest', '/opt/vector-role/.tox/py39-ansible30/lib/python3.9/site-packages/molecule_podman/playbooks/create.yml']
+WARNING An error occurred during the test sequence action: 'create'. Cleaning up.
+INFO Running podman > cleanup
+WARNING Skipping, cleanup playbook not configured.
+INFO Running podman > destroy
+
+PLAY [Destroy] *****************************************************************
+
+TASK [Destroy molecule instance(s)] ********************************************
+changed: [localhost] => (item={'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True})
+
+TASK [Wait for instance(s) deletion to complete] *******************************
+changed: [localhost] => (item={'started': 1, 'finished': 0, 'ansible_job_id': '954966452117.4363', 'results_file': '/root/.ansible_async/954966452117.4363', 'changed': True, 'failed': False, 'item': {'image': 'docker.io/pycontribs/centos:7', 'name': 'centos7', 'pre_build_image': True}, 'ansible_loop_var': 'item'})
+
+PLAY RECAP *********************************************************************
+localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+
+INFO Pruning extra files from scenario ephemeral directory
+ERROR: InvocationError for command /opt/vector-role/.tox/py39-ansible30/bin/molecule test -s podman --destroy always (exited with code 1)
+_______________________________________________________ summary ________________________________________________________
+ERROR: py37-ansible210: commands failed
+ERROR: py37-ansible30: commands failed
+ERROR: py39-ansible210: commands failed
+ERROR: py39-ansible30: commands failed
+[root@a906a0f6163d vector-role]#
+```
+
+The **Tox** run of the role fails because, when the **molecule_podman** driver executes the command `/usr/bin/podman run -d --name centos7 --hostname=centos7 docker.io/pycontribs/centos:7 bash -c "while true; do sleep 10000; done"`,
+**podman** cannot create the container (`Error: invalid config provided: cannot set hostname when running in the host UTS namespace: invalid configuration`): the container configuration is invalid, since `--hostname` must not be passed when running in the host UTS namespace.
+
+> :exclamation: I do not see a fix for this problem: patching the **molecule_podman** module code to remove this parameter is not sufficient here, as other errors appear instead. However, if `tox.ini` is modified as follows:
+
+```ini
+[tox]
+minversion = 1.8
+basepython = python3.6
+envlist = py{37,39}-ansible{210,30}
+skipsdist = true
+[testenv]
+passenv = *
+deps =
+    -r tox-requirements.txt
+    ansible210: ansible<3.0
+    ansible30: ansible<3.1
+commands =
+    {posargs:ansible --version}
+    {posargs:python3 --version}
+    {posargs:python3 test.py}
+```
+
+and `test.py` is given the following content:
+
+```python
+#!/usr/bin/env python3
+print("OK\n")
+```
+
+then a successful **Tox** run looks like this:
+
+```console
+[root@cd1242c242a5 vector-role]# tox
+py37-ansible210 installed: 
ansible==2.10.7,ansible-base==2.10.17,ansible-compat==1.0.0,ansible-lint==5.1.3,arrow==1.2.3,bcrypt==4.0.0,binaryornot==0.4.4,bracex==2.3.post1,cached-property==1.5.2,Cerberus==1.3.2,certifi==2022.6.15.1,cffi==1.15.1,chardet==5.0.0,charset-normalizer==2.1.1,click==8.1.3,click-help-colors==0.9.1,commonmark==0.9.1,cookiecutter==2.1.1,cryptography==38.0.1,distro==1.7.0,enrich==1.2.7,idna==3.3,importlib-metadata==4.12.0,Jinja2==3.1.2,jinja2-time==0.2.0,jmespath==1.0.1,lxml==4.9.1,MarkupSafe==2.1.1,molecule==3.4.0,molecule-podman==1.0.1,packaging==21.3,paramiko==2.11.0,pathspec==0.10.1,pluggy==0.13.1,pycparser==2.21,Pygments==2.13.0,PyNaCl==1.5.0,pyparsing==3.0.9,python-dateutil==2.8.2,python-slugify==6.1.2,PyYAML==5.4.1,requests==2.28.1,rich==12.5.1,ruamel.yaml==0.17.21,ruamel.yaml.clib==0.2.6,selinux==0.2.1,six==1.16.0,subprocess-tee==0.3.5,tenacity==8.0.1,text-unidecode==1.3,typing_extensions==4.3.0,urllib3==1.26.12,wcmatch==8.4,yamllint==1.26.3,zipp==3.8.1 +py37-ansible210 run-test-pre: PYTHONHASHSEED='1020171791' +py37-ansible210 run-test: commands[0] | ansible --version +ansible 2.10.17 + config file = None + configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] + ansible python module location = /opt/vector-role/.tox/py37-ansible210/lib/python3.7/site-packages/ansible + executable location = /opt/vector-role/.tox/py37-ansible210/bin/ansible + python version = 3.7.10 (default, Jun 13 2022, 19:37:24) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10)] +py37-ansible210 run-test: commands[1] | python3 --version +Python 3.7.10 +py37-ansible210 run-test: commands[2] | python3 test.py +OK + +py37-ansible30 installed: 
ansible==3.0.0,ansible-base==2.10.17,ansible-compat==1.0.0,ansible-lint==5.1.3,arrow==1.2.3,bcrypt==4.0.0,binaryornot==0.4.4,bracex==2.3.post1,cached-property==1.5.2,Cerberus==1.3.2,certifi==2022.6.15.1,cffi==1.15.1,chardet==5.0.0,charset-normalizer==2.1.1,click==8.1.3,click-help-colors==0.9.1,commonmark==0.9.1,cookiecutter==2.1.1,cryptography==38.0.1,distro==1.7.0,enrich==1.2.7,idna==3.3,importlib-metadata==4.12.0,Jinja2==3.1.2,jinja2-time==0.2.0,jmespath==1.0.1,lxml==4.9.1,MarkupSafe==2.1.1,molecule==3.4.0,molecule-podman==1.0.1,packaging==21.3,paramiko==2.11.0,pathspec==0.10.1,pluggy==0.13.1,pycparser==2.21,Pygments==2.13.0,PyNaCl==1.5.0,pyparsing==3.0.9,python-dateutil==2.8.2,python-slugify==6.1.2,PyYAML==5.4.1,requests==2.28.1,rich==12.5.1,ruamel.yaml==0.17.21,ruamel.yaml.clib==0.2.6,selinux==0.2.1,six==1.16.0,subprocess-tee==0.3.5,tenacity==8.0.1,text-unidecode==1.3,typing_extensions==4.3.0,urllib3==1.26.12,wcmatch==8.4,yamllint==1.26.3,zipp==3.8.1 +py37-ansible30 run-test-pre: PYTHONHASHSEED='1020171791' +py37-ansible30 run-test: commands[0] | ansible --version +ansible 2.10.17 + config file = None + configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] + ansible python module location = /opt/vector-role/.tox/py37-ansible30/lib/python3.7/site-packages/ansible + executable location = /opt/vector-role/.tox/py37-ansible30/bin/ansible + python version = 3.7.10 (default, Jun 13 2022, 19:37:24) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10)] +py37-ansible30 run-test: commands[1] | python3 --version +Python 3.7.10 +py37-ansible30 run-test: commands[2] | python3 test.py +OK + +py39-ansible210 installed: 
ansible==2.10.7,ansible-base==2.10.17,ansible-compat==2.2.0,ansible-lint==5.1.3,arrow==1.2.3,attrs==22.1.0,bcrypt==4.0.0,binaryornot==0.4.4,bracex==2.3.post1,Cerberus==1.3.2,certifi==2022.6.15.1,cffi==1.15.1,chardet==5.0.0,charset-normalizer==2.1.1,click==8.1.3,click-help-colors==0.9.1,commonmark==0.9.1,cookiecutter==2.1.1,cryptography==38.0.1,distro==1.7.0,enrich==1.2.7,idna==3.3,Jinja2==3.1.2,jinja2-time==0.2.0,jmespath==1.0.1,jsonschema==4.16.0,lxml==4.9.1,MarkupSafe==2.1.1,molecule==3.4.0,molecule-podman==1.0.1,packaging==21.3,paramiko==2.11.0,pathspec==0.10.1,pluggy==0.13.1,pycparser==2.21,Pygments==2.13.0,PyNaCl==1.5.0,pyparsing==3.0.9,pyrsistent==0.18.1,python-dateutil==2.8.2,python-slugify==6.1.2,PyYAML==5.4.1,requests==2.28.1,rich==12.5.1,ruamel.yaml==0.17.21,ruamel.yaml.clib==0.2.6,selinux==0.2.1,six==1.16.0,subprocess-tee==0.3.5,tenacity==8.0.1,text-unidecode==1.3,urllib3==1.26.12,wcmatch==8.4,yamllint==1.26.3 +py39-ansible210 run-test-pre: PYTHONHASHSEED='1020171791' +py39-ansible210 run-test: commands[0] | ansible --version +ansible 2.10.17 + config file = None + configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] + ansible python module location = /opt/vector-role/.tox/py39-ansible210/lib/python3.9/site-packages/ansible + executable location = /opt/vector-role/.tox/py39-ansible210/bin/ansible + python version = 3.9.2 (default, Jun 13 2022, 19:42:33) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10)] +py39-ansible210 run-test: commands[1] | python3 --version +Python 3.9.2 +py39-ansible210 run-test: commands[2] | python3 test.py +OK + +py39-ansible30 installed: 
ansible==3.0.0,ansible-base==2.10.17,ansible-compat==2.2.0,ansible-lint==5.1.3,arrow==1.2.3,attrs==22.1.0,bcrypt==4.0.0,binaryornot==0.4.4,bracex==2.3.post1,Cerberus==1.3.2,certifi==2022.6.15.1,cffi==1.15.1,chardet==5.0.0,charset-normalizer==2.1.1,click==8.1.3,click-help-colors==0.9.1,commonmark==0.9.1,cookiecutter==2.1.1,cryptography==38.0.1,distro==1.7.0,enrich==1.2.7,idna==3.3,Jinja2==3.1.2,jinja2-time==0.2.0,jmespath==1.0.1,jsonschema==4.16.0,lxml==4.9.1,MarkupSafe==2.1.1,molecule==3.4.0,molecule-podman==1.0.1,packaging==21.3,paramiko==2.11.0,pathspec==0.10.1,pluggy==0.13.1,pycparser==2.21,Pygments==2.13.0,PyNaCl==1.5.0,pyparsing==3.0.9,pyrsistent==0.18.1,python-dateutil==2.8.2,python-slugify==6.1.2,PyYAML==5.4.1,requests==2.28.1,rich==12.5.1,ruamel.yaml==0.17.21,ruamel.yaml.clib==0.2.6,selinux==0.2.1,six==1.16.0,subprocess-tee==0.3.5,tenacity==8.0.1,text-unidecode==1.3,urllib3==1.26.12,wcmatch==8.4,yamllint==1.26.3 +py39-ansible30 run-test-pre: PYTHONHASHSEED='1020171791' +py39-ansible30 run-test: commands[0] | ansible --version +ansible 2.10.17 + config file = None + configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] + ansible python module location = /opt/vector-role/.tox/py39-ansible30/lib/python3.9/site-packages/ansible + executable location = /opt/vector-role/.tox/py39-ansible30/bin/ansible + python version = 3.9.2 (default, Jun 13 2022, 19:42:33) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10)] +py39-ansible30 run-test: commands[1] | python3 --version +Python 3.9.2 +py39-ansible30 run-test: commands[2] | python3 test.py +OK + +_______________________________________________________ summary ________________________________________________________ + py37-ansible210: commands succeeded + py37-ansible30: commands succeeded + py39-ansible210: commands succeeded + py39-ansible30: commands succeeded + congratulations :) +[root@cd1242c242a5 vector-role]# +``` +--- + ## Необязательная часть @@ -37,6 +3716,7 @@ В 
качестве решения пришлите ссылки и скриншоты этапов выполнения задания.
+Later, maybe.
---
### Как оформить ДЗ?
diff --git a/08-ansible-05-testing/clickhouse/docker-compose.yml b/08-ansible-05-testing/clickhouse/docker-compose.yml
new file mode 100644
index 000000000..e1e6a8ffb
--- /dev/null
+++ b/08-ansible-05-testing/clickhouse/docker-compose.yml
@@ -0,0 +1,13 @@
+---
+version: "2.4"
+
+services:
+  clickhouse_vm:
+    image: pycontribs/centos:7
+    container_name: clickhouse
+    ports:
+      - "9000:9000"
+      - "8123:8123"
+    tty: true
+    network_mode: bridge
+...
diff --git a/08-ansible-05-testing/clickhouse/group_vars/clickhouse.yml b/08-ansible-05-testing/clickhouse/group_vars/clickhouse.yml
new file mode 100644
index 000000000..4b5cb37cf
--- /dev/null
+++ b/08-ansible-05-testing/clickhouse/group_vars/clickhouse.yml
@@ -0,0 +1,20 @@
+---
+clickhouse_version: "22.3.3.44"
+clickhouse_listen_host:
+  - "0.0.0.0"
+clickhouse_dbs_custom:
+  - { name: "logs" }
+clickhouse_profiles_default:
+  default:
+    date_time_input_format: best_effort
+clickhouse_users_custom:
+  - { name: "logger",
+      password: "logger",
+      profile: "default",
+      quota: "default",
+      networks: { '::/0' },
+      dbs: ["logs"],
+      access_management: 0 }
+
+file_log_structure: "file String, host String, message String, timestamp DateTime64"
+...
diff --git a/08-ansible-05-testing/clickhouse/inventory/main.yml b/08-ansible-05-testing/clickhouse/inventory/main.yml
new file mode 100644
index 000000000..1259a10aa
--- /dev/null
+++ b/08-ansible-05-testing/clickhouse/inventory/main.yml
@@ -0,0 +1,7 @@
+---
+clickhouse:
+  hosts:
+    clickhouse:
+      ansible_host: "clickhouse"
+      ansible_connection: docker
+...
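The `file_log_structure` value defined in `group_vars/clickhouse.yml` above is interpolated into the `CREATE TABLE` statement that the playbook's post task passes to `clickhouse-client`. A minimal Python sketch of the query this produces (illustration only; the helper function is not part of the playbook):

```python
# Column list copied from group_vars/clickhouse.yml.
file_log_structure = "file String, host String, message String, timestamp DateTime64"

def create_table_query(db: str, table: str, structure: str) -> str:
    """Build the CREATE TABLE query that clickhouse-client is asked to run."""
    return f"CREATE TABLE {db}.{table} ({structure}) ENGINE = Log();"

print(create_table_query("logs", "file_log", file_log_structure))
```

Note that the playbook additionally tolerates return code 57 from `clickhouse-client` (the ClickHouse "table already exists" error), so re-running the play stays idempotent.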
diff --git a/08-ansible-05-testing/clickhouse/requirements.yml b/08-ansible-05-testing/clickhouse/requirements.yml new file mode 100644 index 000000000..1c6a0c563 --- /dev/null +++ b/08-ansible-05-testing/clickhouse/requirements.yml @@ -0,0 +1,6 @@ +--- + - src: git@github.com:AlexeySetevoi/ansible-clickhouse.git + scm: git + version: "1.13" + name: clickhouse +... diff --git a/08-ansible-05-testing/clickhouse/site.yml b/08-ansible-05-testing/clickhouse/site.yml new file mode 100644 index 000000000..dbcc215e9 --- /dev/null +++ b/08-ansible-05-testing/clickhouse/site.yml @@ -0,0 +1,56 @@ +--- +- name: Install Clickhouse + hosts: clickhouse + vars: + script_v3: false + pre_tasks: + - name: Detect python3 version + ansible.builtin.command: "python3 --version" + register: cmd_res + changed_when: false + failed_when: false + - name: Set python version of script to 3 + ansible.builtin.set_fact: + script_v3: true + when: cmd_res.rc == 0 + - name: Download SystemD replacer v3 + become: true + ansible.builtin.get_url: + url: https://raw.githubusercontent.com/gdraheim/docker-systemctl-replacement/master/files/docker/systemctl3.py + dest: /systemctl.py + mode: "a+x" + when: script_v3 == true + - name: Download SystemD replacer v2 + become: true + ansible.builtin.get_url: + url: https://raw.githubusercontent.com/gdraheim/docker-systemctl-replacement/master/files/docker/systemctl.py + dest: /systemctl.py + mode: "a+x" + when: script_v3 == false + - name: Create systemd directories + ansible.builtin.file: + path: "{{ item }}" + state: directory + recurse: true + loop: + - /run/systemd/system + - /usr/lib/systemd/system + - name: Replace systemctl + become: true + ansible.builtin.copy: + src: /systemctl.py + remote_src: true + dest: /usr/bin/systemctl + mode: "+x" + force: true + - name: ReCollect system info + ansible.builtin.setup: + roles: + - clickhouse + post_tasks: + - name: Create tables + ansible.builtin.command: "clickhouse-client --host 127.0.0.1 -q 'CREATE TABLE 
logs.file_log ({{ file_log_structure }}) ENGINE = Log();'"
+      register: create_tbl
+      failed_when: create_tbl.rc != 0 and create_tbl.rc != 57
+      changed_when: create_tbl.rc == 0
+...
diff --git a/08-ansible-05-testing/fullstack/README.md b/08-ansible-05-testing/fullstack/README.md
new file mode 100644
index 000000000..295c7fa1e
--- /dev/null
+++ b/08-ansible-05-testing/fullstack/README.md
@@ -0,0 +1,34 @@
+# Test scenario for the whole stack: **Clickhouse** + **Vector** + **Lighthouse**
+
+## Dependencies
+
+- Third-party role that installs and configures [Clickhouse](https://github.com/NamorNinayzuk/ansible-clickhouse)
+- Own role that installs and configures [Vector](https://github.com/NamorNinayzuk/vector-role)
+- Own role that installs and configures [Lighthouse](https://github.com/NamorNinayzuk/lighthouse-role)
+
+> For reference, the requirements file also lists the collection that contains the custom roles; collections are not installed automatically.
+
+## Container descriptions
+
+The **Docker** driver is used for testing.
+
+The roles support **CentOS 7**, **CentOS 8**, and **Ubuntu**, but per the assignment only **CentOS 7** is used.
+
+Three containers are created:
+- `clickhouse`, with ports **9000** and **8123** published to allow manual checks through a GUI
+- `vector`
+- `lighthouse`, with port **8080** published to allow manual inspection
+
+## Purpose of the files
+
+- `cleanup.yml` - the **Molecule** playbook for the **Cleanup** stage: clears the services of test traces, namely truncates the table in **Clickhouse** and removes the files watched by **Vector**
+- `converge` - the **Molecule** playbook for the **Converge** stage: prepares the service infrastructure, installing the systemd emulator script into every container and applying each role to its container
+- `molecule` - the **Molecule** configuration file
+- `requirements` - the **Molecule** dependency file for **ansible-galaxy**
+- `verify` - the main **Molecule** playbook for the **Verify** stage: verifies the **Vector** and **Lighthouse** installations, creates the files watched by **Vector**, and fills them with content
+- `verify_clickhouse` - a **playbook** that queries data from **Clickhouse** and compares it against the expected result; it is implemented separately, with support for several iterations, because due to **Vector** implementation details not all data reaches **Clickhouse** immediately
+
+## Notes
+
+For the whole stack to work, the **Vector** and **Lighthouse** service roles must receive the IP address of the **Clickhouse** service inside the **Docker** container network.
+By default **Molecule** creates containers on a network with the **bridge** driver. Detecting the **Clickhouse** container's IP address and passing it to the others is done in the **play** named `Detect Clickhouse IP`.
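The IP detection mentioned above boils down to reading the container's network settings as reported by `docker inspect`. A small Python sketch of the same extraction, operating on a pre-captured inspect document (the JSON shape is trimmed and the address value is hypothetical, for illustration only):

```python
import json

# Trimmed shape of `docker inspect clickhouse` output (hypothetical values).
inspect_output = json.dumps([
    {"NetworkSettings": {"Networks": {"bridge": {"IPAddress": "172.17.0.2"}}}}
])

def first_container_ip(inspect_json: str) -> str:
    """Return the first IP address among the container's attached networks."""
    container = json.loads(inspect_json)[0]
    networks = container["NetworkSettings"]["Networks"]
    return next(net["IPAddress"] for net in networks.values())

print(first_container_ip(inspect_output))  # prints 172.17.0.2
```

The converge playbook does the equivalent with a Go template passed to `docker inspect -f`, then shares the result with the other hosts via `set_fact`.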
+For manual checking, the ports used are published to the host system and reachable at the local IP address `127.0.0.1`
diff --git a/08-ansible-05-testing/fullstack/molecule/default/cleanup.yml b/08-ansible-05-testing/fullstack/molecule/default/cleanup.yml
new file mode 100644
index 000000000..037e4a3bb
--- /dev/null
+++ b/08-ansible-05-testing/fullstack/molecule/default/cleanup.yml
@@ -0,0 +1,22 @@
+---
+- name: Clean up Clickhouse
+  hosts: clickhouse
+  gather_facts: false
+  tasks:
+    - name: Truncate tables
+      ansible.builtin.command: "clickhouse-client --host 127.0.0.1 -q 'TRUNCATE TABLE logs.file_log;'"
+      register: trunc_tbl
+      changed_when: false
+      failed_when: trunc_tbl.rc != 0
+- name: Clean up Vector
+  hosts: vector
+  gather_facts: false
+  tasks:
+    - name: Delete log files
+      ansible.builtin.file:
+        path: "{{ item }}"
+        state: absent
+      loop:
+        - "~/test/one.log"
+        - "~/test/two.log"
+...
diff --git a/08-ansible-05-testing/fullstack/molecule/default/converge.yml b/08-ansible-05-testing/fullstack/molecule/default/converge.yml
new file mode 100644
index 000000000..67d480013
--- /dev/null
+++ b/08-ansible-05-testing/fullstack/molecule/default/converge.yml
@@ -0,0 +1,108 @@
+---
+- name: Prepare containers
+  hosts: all
+  vars:
+    script_v3: false
+  tasks:
+    - name: Detect python3 version
+      ansible.builtin.command: "python3 --version"
+      register: cmd_res
+      changed_when: false
+      failed_when: false
+    - name: Set python version of script to 3
+      ansible.builtin.set_fact:
+        script_v3: true
+      when: cmd_res.rc == 0
+    - name: Download SystemD replacer v3
+      become: true
+      ansible.builtin.get_url:
+        url: https://raw.githubusercontent.com/gdraheim/docker-systemctl-replacement/master/files/docker/systemctl3.py
+        dest: /systemctl.py
+        mode: "a+x"
+      when: script_v3 == true
+    - name: Download SystemD replacer v2
+      become: true
+      ansible.builtin.get_url:
+        url: https://raw.githubusercontent.com/gdraheim/docker-systemctl-replacement/master/files/docker/systemctl.py
+        dest: /systemctl.py
+ mode: "a+x" + when: script_v3 == false + - name: Create systemd directories + ansible.builtin.file: + path: "{{ item }}" + state: directory + recurse: true + loop: + - /run/systemd/system + - /usr/lib/systemd/system + - name: Replace systemctl + become: true + ansible.builtin.copy: + src: /systemctl.py + remote_src: true + dest: /usr/bin/systemctl + mode: "+x" + force: true + #- name: ReCollect system info + # ansible.builtin.setup: + +- name: Install Clickhouse + hosts: clickhouse + roles: + - clickhouse-role + vars: + clickhouse_version: "22.3.3.44" + clickhouse_listen_host: + - "0.0.0.0" + clickhouse_dbs_custom: + - { name: "logs" } + clickhouse_profiles_default: + default: + date_time_input_format: best_effort + clickhouse_users_custom: + - { name: "user", + password: "userlog", + profile: "default", + quota: "default", + networks: { '::/0' }, + dbs: ["logs"], + access_management: 0 } + file_log_structure: "file String, host String, message String, timestamp DateTime64" + post_tasks: + - name: Create tables + ansible.builtin.command: "clickhouse-client --host 127.0.0.1 -q 'CREATE TABLE logs.file_log ({{ file_log_structure }}) ENGINE = Log();'" + register: create_tbl + failed_when: create_tbl.rc != 0 and create_tbl.rc != 57 + changed_when: create_tbl.rc == 0 + +- name: Detect Clickhouse IP + hosts: vector, lighthouse + tasks: + - name: Get Clickhouse IP from docker engine + ansible.builtin.command: "docker inspect -f {{'{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}'}} clickhouse" + register: cmd_res + changed_when: false + failed_when: cmd_res.rc != 0 + delegate_to: localhost + - name: Set Clickhouse IP to facts + ansible.builtin.set_fact: + clickhouse_host: "{{ cmd_res.stdout }}" + when: cmd_res.rc == 0 + +- name: Install Vector + hosts: vector + roles: + - vector-role + vars: + clickhouse_user: "user" + clickhouse_password: "userlog" + +- name: Install Lighthouse + hosts: lighthouse + roles: + - lighthouse-role + vars: + clickhouse_user: "user" + 
clickhouse_password: "userlog" + lighthouse_port: "8080" +... diff --git a/08-ansible-05-testing/fullstack/molecule/default/molecule.yml b/08-ansible-05-testing/fullstack/molecule/default/molecule.yml new file mode 100644 index 000000000..05c00e360 --- /dev/null +++ b/08-ansible-05-testing/fullstack/molecule/default/molecule.yml @@ -0,0 +1,57 @@ +--- +dependency: + name: galaxy +driver: + name: docker +platforms: + - name: clickhouse + hostname: clickhouse + image: docker.io/pycontribs/centos:7 + pre_build_image: true + published_ports: + - "0.0.0.0:9000:9000/tcp" + - "[::]:9000:9000/tcp" + - "0.0.0.0:8123:8123/tcp" + - "[::]:8123:8123/tcp" + - name: vector + hostname: vector + image: docker.io/pycontribs/centos:7 + pre_build_image: true + - name: lighthouse + hostname: lighthouse + image: docker.io/pycontribs/centos:7 + pre_build_image: true + published_ports: + - "0.0.0.0:8080:8080/tcp" + - "[::]:8080:8080/tcp" +provisioner: + name: ansible +verifier: + name: ansible +scenario: + create_sequence: + - dependency + - create + check_sequence: + - dependency + - destroy + - create + - converge + - check + - destroy + converge_sequence: + - converge + destroy_sequence: + - destroy + test_sequence: + - dependency + - lint + - syntax + - destroy + - create + - converge + - idempotence + - cleanup + - verify + - destroy +... 
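Порядок шагов каждого сценария **Molecule** берёт строго из списков в `molecule.yml`. Небольшой набросок на **Python** (список `test_sequence` скопирован из сценария выше, имена функций условные), который делает ключевые инварианты порядка явными:

```python
# Последовательность test_sequence из molecule.yml (скопирована как есть).
test_sequence = [
    "dependency", "lint", "syntax", "destroy", "create",
    "converge", "idempotence", "cleanup", "verify", "destroy",
]


def step_order(sequence, step):
    """Возвращает позицию первого вхождения шага или -1, если его нет."""
    return sequence.index(step) if step in sequence else -1


# Контейнеры создаются до converge, а cleanup выполняется до verify,
# поэтому каждая проверка стартует с очищенной таблицей logs.file_log.
assert step_order(test_sequence, "create") < step_order(test_sequence, "converge")
assert step_order(test_sequence, "cleanup") < step_order(test_sequence, "verify")
```

Именно поэтому `cleanup.yml` стоит в `test_sequence` перед `verify`: повторный прогон `molecule test` не найдёт в ClickHouse данных от предыдущего запуска.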
diff --git a/08-ansible-05-testing/fullstack/molecule/default/requirements.yml b/08-ansible-05-testing/fullstack/molecule/default/requirements.yml new file mode 100644 index 000000000..c941cb93a --- /dev/null +++ b/08-ansible-05-testing/fullstack/molecule/default/requirements.yml @@ -0,0 +1,13 @@ +--- +- name: clickhouse-role + src: git@github.com:NamorNinayzuk/ansible-clickhouse.git + scm: git + version: 0.0.3 +- name: vector-role + src: git@github.com:NamorNinayzuk/vector-role.git + scm: git + version: 0.0.4 +- name: lighthouse-role + src: git@github.com:NamorNinayzuk/lighthouse-role.git + scm: git + version: 0.0.3
diff --git a/08-ansible-05-testing/fullstack/molecule/default/verify.yml b/08-ansible-05-testing/fullstack/molecule/default/verify.yml new file mode 100644 index 000000000..c0ff1754f --- /dev/null +++ b/08-ansible-05-testing/fullstack/molecule/default/verify.yml @@ -0,0 +1,89 @@ +--- +- name: Verify vector installation + hosts: vector + gather_facts: false + tasks: + - name: Get information about vector + ansible.builtin.command: systemctl show vector + register: srv_res + changed_when: false + - name: Assert vector service + ansible.builtin.assert: + that: + - srv_res.rc == 0 + - "'ActiveState=active' in srv_res.stdout_lines" + - "'LoadState=loaded' in srv_res.stdout_lines" + - "'SubState=running' in srv_res.stdout_lines" + - name: Validate vector configuration + ansible.builtin.command: vector validate + register: vld_res + changed_when: false + - name: Assert vector healthcheck + ansible.builtin.assert: + that: + - vld_res.rc == 0 + - "'Validated' == (vld_res.stdout_lines | map('trim') | list | last)" + +- name: Verify lighthouse installation + hosts: lighthouse + gather_facts: false + tasks: + - name: Get information about nginx + ansible.builtin.command: systemctl show nginx + register: srv_res + changed_when: false + - name: Assert nginx service + ansible.builtin.assert: + that: + - srv_res.rc == 0 + - "'ActiveState=active' in srv_res.stdout_lines" + - "'LoadState=loaded' in srv_res.stdout_lines" + - "'SubState=running' in srv_res.stdout_lines" + - name: Save server home page + ansible.builtin.get_url: + url: "http://127.0.0.1:8080" + dest: "~/index.html" + mode: "0644" + - name: Get home page + ansible.builtin.command: cat ~/index.html + register: fil_res + changed_when: false + - name: Assert lighthouse homepage + ansible.builtin.assert: + that: + - fil_res.rc == 0 + - fil_res.stdout.find('LightHouse') != -1 + +- name: Store data to vector + hosts: vector + gather_facts: false + tasks: + - name: Create files + ansible.builtin.shell: "touch {{ item }} && sync" + loop: + - "~/test/one.log" + - "~/test/two.log" + - name: Write some data to file + ansible.builtin.shell: echo {{ item }} >> ~/test/one.log && sync + become: true + loop: + - "First line" + - "Second line" + - "Third line" + - name: Write some data to another file + ansible.builtin.shell: echo {{ item }} >> ~/test/two.log && sync + become: true + loop: + - "A little bit of data" + - "Some amount of data" + - "More data" + +- name: Check Clickhouse data + hosts: clickhouse + gather_facts: false + vars: + retry_count: 5 + retry_delay: 1 + tasks: + - ansible.builtin.include_tasks: verify_clickhouse.yml +...
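Последний play передаёт управление в `verify_clickhouse.yml` вместе с переменными `retry_count` и `retry_delay`. Ту же логику «повторять проверку, пока не кончатся попытки» можно набросать на **Python** (имена функций условные — это иллюстрация механизма, а не код роли):

```python
import time


def retry(check, retry_count=5, retry_delay=1):
    """Повторяет check, пока он не пройдёт или не кончатся попытки —
    аналог связки block/rescue + include_tasks в verify_clickhouse.yml."""
    while True:
        retry_count -= 1              # "Decrease attempts count"
        try:
            return check()            # задачи uri + assert
        except AssertionError:
            if retry_count == 0:      # "Fail if retry count is zero"
                raise RuntimeError("No more attempts left")
            time.sleep(retry_delay)   # "Try another attempt after delay"


# Пример: проверка, которая проходит только с третьей попытки —
# так ведёт себя ClickHouse, пока vector ещё не успел доставить логи.
calls = {"n": 0}


def probe():
    calls["n"] += 1
    assert calls["n"] >= 3
    return "ok"


print(retry(probe, retry_count=5, retry_delay=0))  # -> ok
```

Счётчик уменьшается до запуска проверки, поэтому при `retry_count: 5` выполняется максимум пять попыток — ровно как в рекурсивном `include_tasks`.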
diff --git a/08-ansible-05-testing/fullstack/molecule/default/verify_clickhouse.yml b/08-ansible-05-testing/fullstack/molecule/default/verify_clickhouse.yml new file mode 100644 index 000000000..936429887 --- /dev/null +++ b/08-ansible-05-testing/fullstack/molecule/default/verify_clickhouse.yml @@ -0,0 +1,45 @@ +--- +- name: Repeatable check + block: + - name: Decrease attempts count + set_fact: + retry_count: "{{ retry_count | int - 1 }}" + - name: Get clickhouse response + ansible.builtin.uri: + url: http://127.0.0.1:8123/?user=logger&password=logger&add_http_cors_header=1&output_format_json_quote_64bit_integers=1&database=logs&result_overflow_mode=throw&readonly=1 + method: "POST" + body_format: "raw" + body: "SELECT file, message FROM logs.file_log LIMIT 6 FORMAT Values" + return_content: true + register: res + - name: Compose acquired data to list + ansible.builtin.set_fact: + data_raw: '{{ res.content[1:-1] | split("),(") }}' + - name: Print acquired data + debug: + var: data_raw + - name: Verify acquired data + ansible.builtin.assert: + that: + - data_raw | length == 6 + - data_raw == + [ + "'/root/test/one.log','First line'", + "'/root/test/one.log','Second line'", + "'/root/test/one.log','Third line'", + "'/root/test/two.log','A little bit of data'", + "'/root/test/two.log','Some amount of data'", + "'/root/test/two.log','More data'" + ] + rescue: + - name: Fail if retry count is zero + ansible.builtin.fail: + msg: "No more attempts left" + when: retry_count | int == 0 + - name: Try another attempt after delay + ansible.builtin.wait_for: + timeout: "{{ retry_delay }}" # seconds + delegate_to: localhost + become: false + - include_tasks: verify_clickhouse.yml +... 
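Выражение `res.content[1:-1] | split("),(")` из `verify_clickhouse.yml` разбирает тело ответа **ClickHouse** в формате `Values`. Тот же разбор на **Python** (пример ответа условный, не снят с живого ClickHouse):

```python
def values_to_rows(content):
    """Разбивает тело ответа FORMAT Values вида ('a','b'),('c','d')
    на строки-кортежи — так же, как Jinja-выражение
    content[1:-1] | split("),(") в verify_clickhouse.yml."""
    return content[1:-1].split("),(")


# Условный ответ на SELECT file, message ... FORMAT Values (две строки).
sample = "('/root/test/one.log','First line'),('/root/test/two.log','More data')"
rows = values_to_rows(sample)
assert rows == [
    "'/root/test/one.log','First line'",
    "'/root/test/two.log','More data'",
]
print(len(rows), "rows")  # -> 2 rows
```

Срез `[1:-1]` убирает внешние скобки первой и последней записи, а `split("),(")` режет оставшееся по границам записей — поэтому проверка `data_raw | length == 6` ожидает ровно шесть строк из двух лог-файлов.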
diff --git a/08-ansible-06-module/README.md b/08-ansible-06-module/README.md index 8ea618277..45740ad97 100644 --- a/08-ansible-06-module/README.md +++ b/08-ansible-06-module/README.md @@ -1,175 +1,416 @@ -# Домашнее задание к занятию "6.Создание собственных модулей" +# Домашнее задание по лекции "8.6 Создание собственных Modules" ## Подготовка к выполнению -1. Создайте пустой публичных репозиторий в любом своём проекте: `my_own_collection` -2. Скачайте репозиторий ansible: `git clone https://github.com/ansible/ansible.git` по любому удобному вам пути -3. Зайдите в директорию ansible: `cd ansible` -4. Создайте виртуальное окружение: `python3 -m venv venv` -5. Активируйте виртуальное окружение: `. venv/bin/activate`. Дальнейшие действия производятся только в виртуальном окружении -6. Установите зависимости `pip install -r requirements.txt` -7. Запустить настройку окружения `. hacking/env-setup` -8. Если все шаги прошли успешно - выйти из виртуального окружения `deactivate` -9. Ваше окружение настроено, для того чтобы запустить его, нужно находиться в директории `ansible` и выполнить конструкцию `. venv/bin/activate && . hacking/env-setup` + +> 1. Создайте пустой публичных репозиторий в любом своём проекте: `my_own_collection` +> 2. Скачайте репозиторий ansible: `git clone https://github.com/ansible/ansible.git` по любому удобному вам пути +> 3. Зайдите в директорию ansible: `cd ansible` +> 4. Создайте виртуальное окружение: `python3 -m venv venv` +> 5. Активируйте виртуальное окружение: `. venv/bin/activate`. Дальнейшие действия производятся только в виртуальном окружении +> 6. Установите зависимости `pip install -r requirements.txt` +> 7. Запустить настройку окружения `. hacking/env-setup` +> 8. Если все шаги прошли успешно - выйти из виртуального окружения `deactivate` +> 9. Ваше окружение настроено, для того чтобы запустить его, нужно находиться в директории `ansible` и выполнить конструкцию `. venv/bin/activate && . 
hacking/env-setup` +```console +root@debian:~/my-ansible-6/ansible$ python3 -m venv venv +root@debian:~/my-ansible-6/ansible$ . venv/bin/activate +(venv) root@debian:~/my-ansible-6/ansible$ pip install -r requirements.txt +Collecting jinja2>=3.0.0 + Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB) +Collecting PyYAML>=5.1 + Downloading PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (682 kB) + ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 682.2/682.2 kB 3.5 MB/s eta 0:00:00 +Collecting cryptography + Using cached cryptography-37.0.4-cp36-abi3-manylinux_2_24_x86_64.whl (4.1 MB) +Collecting packaging + Using cached packaging-21.3-py3-none-any.whl (40 kB) +Collecting resolvelib<0.9.0,>=0.5.3 + Using cached resolvelib-0.8.1-py2.py3-none-any.whl (16 kB) +Collecting MarkupSafe>=2.0 + Using cached MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB) +Collecting cffi>=1.12 + Using cached cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (441 kB) +Collecting pyparsing!=3.0.5,>=2.0.2 + Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB) +Collecting pycparser + Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB) +Installing collected packages: resolvelib, PyYAML, pyparsing, pycparser, MarkupSafe, packaging, jinja2, cffi, cryptography +Successfully installed MarkupSafe-2.1.1 PyYAML-6.0 cffi-1.15.1 cryptography-37.0.4 jinja2-3.1.2 packaging-21.3 pycparser-2.21 pyparsing-3.0.9 resolvelib-0.8.1 +(venv) root@debian:~/my-ansible-6/ansible$ . 
hacking/env-setup +running egg_info +creating lib/ansible_core.egg-info +writing lib/ansible_core.egg-info/PKG-INFO +writing dependency_links to lib/ansible_core.egg-info/dependency_links.txt +writing entry points to lib/ansible_core.egg-info/entry_points.txt +writing requirements to lib/ansible_core.egg-info/requires.txt +writing top-level names to lib/ansible_core.egg-info/top_level.txt +writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt' +reading manifest file 'lib/ansible_core.egg-info/SOURCES.txt' +reading manifest template 'MANIFEST.in' +warning: no files found matching 'SYMLINK_CACHE.json' +warning: no previously-included files found matching 'docs/docsite/rst_warnings' +warning: no previously-included files found matching 'docs/docsite/rst/conf.py' +warning: no previously-included files found matching 'docs/docsite/rst/index.rst' +warning: no previously-included files found matching 'docs/docsite/rst/dev_guide/index.rst' +warning: no previously-included files matching '*' found under directory 'docs/docsite/_build' +warning: no previously-included files matching '*.pyc' found under directory 'docs/docsite/_extensions' +warning: no previously-included files matching '*.pyo' found under directory 'docs/docsite/_extensions' +warning: no files found matching '*.ps1' under directory 'lib/ansible/modules/windows' +warning: no files found matching '*.yml' under directory 'lib/ansible/modules' +warning: no files found matching 'validate-modules' under directory 'test/lib/ansible_test/_util/controller/sanity/validate-modules' +adding license file 'COPYING' +writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt' + +Setting up Ansible to run out of checkout... 
+ +PATH=/home/my-ansible-6/ansible/bin:/home/my-ansible-6/ansible/venv/bin:/home/.local/bin:/usr/bin:/home/.local/bin:/home/yandex-cloud/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games +PYTHONPATH=/home/my-ansible-6/ansible/test/lib:/home/my-ansible-6/ansible/lib +MANPATH=/home/my-ansible-6/ansible/docs/man:/usr/share/man:/usr/local/man:/usr/local/share/man + +Remember, you may wish to specify your host file with -i + +Done! + +(venv) root@debian:~/my-ansible-6/ansible$ deactivate +root@debian:~/my-ansible-6/ansible$ +``` ## Основная часть -Наша цель - написать собственный module, который мы можем использовать в своей role, через playbook. Всё это должно быть собрано в виде collection и отправлено в наш репозиторий. - -1. В виртуальном окружении создать новый `my_own_module.py` файл -2. Наполнить его содержимым: -```python -#!/usr/bin/python - -# Copyright: (c) 2018, Terry Jones -# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) -from __future__ import (absolute_import, division, print_function) -__metaclass__ = type - -DOCUMENTATION = r''' ---- -module: my_test - -short_description: This is my test module - -# If this is part of a collection, you need to use semantic versioning, -# i.e. the version is of the form "2.5.0" and not "2.4". -version_added: "1.0.0" - -description: This is my longer description explaining my test module. - -options: - name: - description: This is the message to send to the test module. - required: true - type: str - new: - description: - - Control to demo if the result of this module is changed or not. - - Parameter description can be a list as well. 
- required: false - type: bool -# Specify this value according to your collection -# in format of namespace.collection.doc_fragment_name -extends_documentation_fragment: - - my_namespace.my_collection.my_doc_fragment_name - -author: - - Your Name (@yourGitHubHandle) -''' - -EXAMPLES = r''' -# Pass in a message -- name: Test with a message - my_namespace.my_collection.my_test: - name: hello world - -# pass in a message and have changed true -- name: Test with a message and changed output - my_namespace.my_collection.my_test: - name: hello world - new: true - -# fail the module -- name: Test failure of the module - my_namespace.my_collection.my_test: - name: fail me -''' - -RETURN = r''' -# These are examples of possible return values, and in general should use other names for return values. -original_message: - description: The original name param that was passed in. - type: str - returned: always - sample: 'hello world' -message: - description: The output message that the test module generates. 
- type: str - returned: always - sample: 'goodbye' -''' - -from ansible.module_utils.basic import AnsibleModule - - -def run_module(): - # define available arguments/parameters a user can pass to the module - module_args = dict( - name=dict(type='str', required=True), - new=dict(type='bool', required=False, default=False) - ) - - # seed the result dict in the object - # we primarily care about changed and state - # changed is if this module effectively modified the target - # state will include any data that you want your module to pass back - # for consumption, for example, in a subsequent task - result = dict( - changed=False, - original_message='', - message='' - ) - - # the AnsibleModule object will be our abstraction working with Ansible - # this includes instantiation, a couple of common attr would be the - # args/params passed to the execution, as well as if the module - # supports check mode - module = AnsibleModule( - argument_spec=module_args, - supports_check_mode=True - ) - - # if the user is working with this module in only check mode we do not - # want to make any changes to the environment, just return the current - # state with no modifications - if module.check_mode: - module.exit_json(**result) - - # manipulate or modify the state as needed (this is going to be the - # part where your module will do what it needs to do) - result['original_message'] = module.params['name'] - result['message'] = 'goodbye' - - # use whatever logic you need to determine whether or not this module - # made any modifications to your target - if module.params['new']: - result['changed'] = True - - # during the execution of the module, if there is an exception or a - # conditional state that effectively causes a failure, run - # AnsibleModule.fail_json() to pass in the message and the result - if module.params['name'] == 'fail me': - module.fail_json(msg='You requested this to fail', **result) - - # in the event of a successful module execution, you will want to - # 
simple AnsibleModule.exit_json(), passing the key/value results - module.exit_json(**result) - - -def main(): - run_module() - - -if __name__ == '__main__': - main() -``` -Или возьмите данное наполнение из [статьи](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#creating-a-module). - -3. Заполните файл в соответствии с требованиями ansible так, чтобы он выполнял основную задачу: module должен создавать текстовый файл на удалённом хосте по пути, определённом в параметре `path`, с содержимым, определённым в параметре `content`. -4. Проверьте module на исполняемость локально. -5. Напишите single task playbook и используйте module в нём. -6. Проверьте через playbook на идемпотентность. -7. Выйдите из виртуального окружения. -8. Инициализируйте новую collection: `ansible-galaxy collection init my_own_namespace.yandex_cloud_elk` -9. В данную collection перенесите свой module в соответствующую директорию. -10. Single task playbook преобразуйте в single task role и перенесите в collection. У role должны быть default всех параметров module -11. Создайте playbook для использования этой role. -12. Заполните всю документацию по collection, выложите в свой репозиторий, поставьте тег `1.0.0` на этот коммит. -13. Создайте .tar.gz этой collection: `ansible-galaxy collection build` в корневой директории collection. -14. Создайте ещё одну директорию любого наименования, перенесите туда single task playbook и архив c collection. -15. Установите collection из локального архива: `ansible-galaxy collection install .tar.gz` -16. Запустите playbook, убедитесь, что он работает. -17. В ответ необходимо прислать ссылки на collection и tar.gz архив, а также скриншоты выполнения пунктов 4, 6, 15 и 16. +Наша цель - написать собственный module, который мы можем использовать в своей role, через playbook. +Всё это должно быть собрано в виде collection и отправлено в наш репозиторий. + +### 1. 
В виртуальном окружении создать новый `my_own_module.py` файл + +Встроенные модули **Ansible** находятся по пути `lib/ansible/modules`, там и будем создавать файл, только с именем `file_content.py` + +--- + +### 2. Наполнить его содержимым из [статьи](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#creating-a-module) + +--- + +### 3. Перепишите файл в соответствии с требованиями ansible так, чтобы он выполнял следующую задачу + +**Module** должен создавать текстовый файл на удалённом хосте по пути, определённом в параметре `path`, с содержимым, определённым в параметре `content`. + +Готовый файл в репозитории: [file_content.py](https://github.com/NamorNinayzuk/ansible-collection/blob/main/plugins/modules/file_content.py) + +--- + +### 4. Проверьте module на исполняемость локально. + +Для проверки модуля на исполняемость локально можно запустить его непостредственно в **Python** передав **JSON** файл с нужным содержимым + +Пример передаваемого **JSON** файла +```json +{ + "ANSIBLE_MODULE_ARGS": { + "path": "test123", + "content": "test line" + } +} +``` + +Команда запуска: `python -m <модуль> <файл>`, где `<модуль>` - путь к исполняемому модулю (`ansible.modules.file_content` для модуля `file_content` из каталога встроенных в **Ansible**), а `<файл>` - **JSON** файл с входными данными + +Первый запуск, когда файла нет: +```console +(venv) root@debian:~/my-ansible-6/ansible$ python -m ansible.modules.file_content payload.json + +{"changed": true, "path": "test123", "content": "test line", "status": "created", "uid": 1000, "gid": 1000, "owner": "sa", "group": "sa", "mode": "0644", "state": "file", "size": 9, "invocation": {"module_args": {"path": "test123", "content": "test line"}}} +(venv) root@debian:~/my-ansible-6/ansible$ +``` + +Результат: +```json +{ + "changed": true, + "path": "test123", + "content": "test line", + "status": "created", + "uid": 1000, + "gid": 1000, + "owner": "sa", + "group": "sa", + "mode": "0644", + "state": 
"file", + "size": 9, + "invocation": { + "module_args": { + "path": "test123", + "content": "test line" + } + } +} +``` + +Второй запуск, когда файл уже существует: +```console +(venv) root@debian:~/my-ansible-6/ansible$ python -m ansible.modules.file_content payload.json + +{"changed": false, "path": "test123", "content": "test line", "status": "resisted", "uid": 1000, "gid": 1000, "owner": "sa", "group": "sa", "mode": "0644", "state": "file", "size": 9, "invocation": {"module_args": {"path": "test123", "content": "test line"}}} +(venv) root@debian:~/my-ansible-6/ansible$ +``` + +Результат: +```json +{ + "changed": false, + "path": "test123", + "content": "test line", + "status": "resisted", + "uid": 1000, + "gid": 1000, + "owner": "sa", + "group": "sa", + "mode": "0644", + "state": "file", + "size": 9, + "invocation": { + "module_args": { + "path": "test123", + "content": "test line" + } + } +} +``` + +Третий запуск с изменённым параметром `content`: +```console +(venv) root@debian:~/my-ansible-6/ansible$ python -m ansible.modules.file_content payload.json + +{"changed": true, "path": "test123", "content": "test line +add", "status": "modified", "uid": 1000, "gid": 1000, "owner": "sa", "group": "sa", "mode": "0644", "state": "file", "size": 14, "invocation": {"module_args": {"path": "test123", "content": "test line +add"}}} +(venv) root@debian:~/my-ansible-6/ansible$ +``` + +Результат: +```json +{ + "changed": true, + "path": "test123", + "content": "test line +add", + "status": "modified", + "uid": 1000, + "gid": 1000, + "owner": "sa", + "group": "sa", + "mode": "0644", + "state": "file", + "size": 14, + "invocation": { + "module_args": { + "path": "test123", + "content": "test line +add" + } + } +} +``` + +**JSON** блок вывода дополняется следующими параметрами: **uid**, **gid**, **owner**, **group**, **mode**, **state**, **size**. Так происходит всегда, когда в модуле есть параметр с именем `path` или `dest`. 
Само добавление выполняется при завершении модуля (функция `exit_json`). + +Также в результат всегда добавляется блок **invocation** (если его нет) с параметрами вызова модуля. Добавление также выпоняется при завершении модуля. + +--- + +### 5. Напишите single task playbook и используйте module в нём. + +Пример подобного **playbook**: +```yaml +--- +- name: Test module + hosts: localhost + tasks: + - name: run on test123 + file_content: + path: 'test123' + content: 'write test' +... +``` + +Исполнение **playbook**: (файла **test123** нет) +```console +(venv) root@debian:~/my-ansible-6/ansible$ ansible-playbook site.yml +[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are +modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and +can become unstable at any point. +[WARNING]: No inventory was parsed, only implicit localhost is available +[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match +'all' + +PLAY [Test module] ***************************************************************************************************** + +TASK [Gathering Facts] ************************************************************************************************* +ok: [localhost] + +TASK [run on test123 - write] ******************************************************************************************* +changed: [localhost] + +PLAY RECAP ************************************************************************************************************* +localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + +(venv) root@debian:~/my-ansible-6/ansible$ +``` + +--- + +### 6. Проверьте через playbook на идемпотентность. 
+ +Повторный запуск **playbook** (файл **test123** уже существует): +```console +(venv) root@debian:~/my-ansible-6/ansible$ ansible-playbook site.yml +[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are +modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and +can become unstable at any point. +[WARNING]: No inventory was parsed, only implicit localhost is available +[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match +'all' + +PLAY [Test module] ***************************************************************************************************** + +TASK [Gathering Facts] ************************************************************************************************* +ok: [localhost] + +TASK [run on test123 - write] ******************************************************************************************* +ok: [localhost] + +PLAY RECAP ************************************************************************************************************* +localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + +(venv) root@debian:~/my-ansible-6/ansible$ +``` + +--- + +### 7. Выйдите из виртуального окружения. + +Выход из окружения выполняется командой `deactivate` + +--- + +### 8. Инициализируйте новую collection my_own_namespace.yandex_cloud_elk + +Инициализация новой **collection** выполняется по шаблону `ansible-galaxy collection init <пространство>.<коллекция>`, где `<пространство>` - Пространство имён автора (название группы для всех **collection** автора), `<коллекция>` - название **collection** + +Создание своей **collection** (вместо `my_own_namespace.yandex_cloud_elk` используется `test.utils`): + +```console +root@debian:~/my-ansible-6$ ansible-galaxy collection init test.utils +[WARNING]: You are running the development version of Ansible. 
You should only run Ansible from "devel" if you are +modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and +can become unstable at any point. +- Collection test.utils was created successfully +root@debian:~/my-ansible-6$ +``` + +--- + +### 9. В данную collection перенесите свой module в соответствующую директорию. + +**Module** в **Ansible** это частный случай **plugin**, а значит и размещать его нужно внутри каталога **plugins**. +В соответствии с подсказкой **Ansible** (`plugins\readme.md`) модули должны размещаться в каталоге `plugins\modules`. + +--- + +### 10. Single task playbook преобразуйте в single task role и перенесите в collection. У role должны быть default всех параметров module + +Ссылка на **single task role** в **collection**: [file_content](https://github.com/NamorNinayzuk/ansible-collection/tree/main/roles/file_content) + +--- + +### 11. Создайте playbook для использования этой role. + +Ссылка на **playbook** в репозитории: [test_file_content](https://github.com/NamorNinayzuk/ansible-collection/blob/main/playbooks/test_file_content.yml) + +--- + +### 12. Заполните всю документацию по collection, выложите в свой репозиторий, поставьте тег `1.0.0` на этот коммит. + +Ссылка на репозиторий версии **1.0.0**: [Ansible-Collection](https://github.com/NamorNinayzuk/ansible-collection) + +--- + +### 13. Создайте архив .tar.gz этой collection в корневой директории collection. + +Архив **collection** создаётся командой `ansible-galaxy collection build` + +```console +root@debian:~/my-ansible-6/utils$ ansible-galaxy collection build +[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are +modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and +can become unstable at any point. 
+Created collection for test.utils at /home/my-ansible-6/utils/test-utils-1.0.0.tar.gz +root@debian:~/my-ansible-6/utils$ +``` + +--- + +### 14. Создайте ещё одну директорию любого наименования, перенесите туда single task playbook и архив c collection. + +```console +root@debian:~/my-ansible-6/final_test$ ls +test-utils-1.0.0.tar.gz test_file_content.yml +root@debian:~/my-ansible-6/final_test$ +``` + +--- + +### 15. Установите collection из локального архива + +Установка **collection** из архива выполняется командой `ansible-galaxy collection install <архив.tar.gz>` - где `<архив.tar.gz>` - файл **.tar.gz** архива **collection**, созданный **Ansible**. +Если запрашиваемая версия уже установлена, но требуется её заменить нужно добавить ключ: `--force` + +```console +root@debian:~/my-ansible-6/final_test$ ansible-galaxy collection install test-utils-1.0.0.tar.gz +[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are +modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and +can become unstable at any point. +Starting galaxy collection install process +Process install dependency map +Starting collection install process +Installing 'test.utils:1.0.0' to '/home/.ansible/collections/ansible_collections/utils' +test.utils:1.0.0 was installed successfully +root@debian:~/my-ansible-6/final_test$ +``` + +--- + +### 16. Запустите playbook, убедитесь, что он работает. + +```console +root@debian:~/my-ansible-6/final_test$ ansible-playbook test_file_content.yml +[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are +modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and +can become unstable at any point. 
+[WARNING]: No inventory was parsed, only implicit localhost is available +[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match +'all' + +PLAY [Test file content role] ****************************************************************************************** + +TASK [Gathering Facts] ************************************************************************************************* +ok: [localhost] + +TASK [test.utils.file_content : Compute file] ************************************************************************** +changed: [localhost] + +PLAY RECAP ************************************************************************************************************* +localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + +root@debian:~/my-ansible-6/final_test$ ls +test-utils-1.0.0.tar.gz test_file test_file_content.yml +root@debian:~/my-ansible-6/final_test$ cat test_file && echo '' +sequence_of_data +root@debian:~/my-ansible-6/final_test$ +``` + +--- + +### 17. В ответ необходимо прислать ссылку на репозиторий с collection + +Репозиторий **collection** [Ansible-Collection](https://github.com/NamorNinayzuk/ansible-collection) + +--- ## Необязательная часть diff --git a/09-ci-01-intro/README.md b/09-ci-01-intro/README.md index 6751e9d77..ba6778132 100644 --- a/09-ci-01-intro/README.md +++ b/09-ci-01-intro/README.md @@ -25,10 +25,19 @@ Создать задачу с типом bug, попытаться провести его по всему workflow до Done. Создать задачу с типом epic, к ней привязать несколько задач с типом task, провести их по всему workflow до Done. При проведении обеих задач по статусам использовать kanban. Вернуть задачи в статус Open. Перейти в scrum, запланировать новый спринт, состоящий из задач эпика и одного бага, стартовать спринт, провести задачи до состояния Closed. Закрыть спринт. 
+--- +[kanban](https://github.com/NamorNinayzuk/mnt-homeworks/blob/MNT-video/09-ci-01-intro/Workflow%20Bugs.xml) +--- +--- +[scrum](https://github.com/NamorNinayzuk/mnt-homeworks/blob/MNT-video/09-ci-01-intro/Workflow%20Other%20tasks.xml) +--- + Если всё отработало в рамках ожидания - выгрузить схемы workflow для импорта в XML. Файлы с workflow приложить к решению задания. --- +--- + ### Как оформить ДЗ? Выполненное домашнее задание пришлите ссылкой на .md-файл в вашем репозитории. diff --git a/09-ci-01-intro/Workflow Bugs.xml b/09-ci-01-intro/Workflow Bugs.xml new file mode 100644 index 000000000..8c8b982f4 --- /dev/null +++ b/09-ci-01-intro/Workflow Bugs.xml @@ -0,0 +1,314 @@ + + + + Workflow for Bugs + 5d4bf61bfbe7c30cedfc5de0 + 5d4bf61bfbe7c30cedfc5de0 + 1673402956566 + + + + + common.forms.create + 0 + + + com.atlassian.jira.workflow.validator.PermissionValidator + Create Issue + + + + + + + com.atlassian.jira.workflow.function.issue.IssueCreateFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 1 + + + + + + + + + + + closeissue.desc + closeissue.close + closeissue.title + 60 + + + + com.atlassian.jira.workflow.condition.PermissionCondition + Resolve Issue + + + com.atlassian.jira.workflow.condition.PermissionCondition + Close Issue + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 5 + + + + + + + + + startprogress.title + 20 + + + + com.atlassian.jira.workflow.condition.PermissionCondition + assignable + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueFieldFunction + resolution + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + 
com.atlassian.jira.workflow.function.issue.AssignToCurrentUserFunction + com.atlassian.jira.plugin.system.workflowassigntocurrentuser-function + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 11 + + + + + + + + + 1 + + + + + + 3 + + + + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 13 + + + + + + + + + false + 6 + + + 10035 + + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 13 + + + + + + + + + 10045 + + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 13 + + + + + + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + 
com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 13 + + + + + + + + + 10046 + + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 13 + + + + + + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 13 + + + + + + + + + diff --git a/09-ci-01-intro/Workflow Other tasks.xml b/09-ci-01-intro/Workflow Other tasks.xml new file mode 100644 index 000000000..377d9708f --- /dev/null +++ b/09-ci-01-intro/Workflow Other tasks.xml @@ -0,0 +1,374 @@ + + + + Workflow for Other Tasks + 5d4bf61bfbe7c30cedfc5de0 + 5d4bf61bfbe7c30cedfc5de0 + 1673401785250 + + + common.forms.create + 0 + + + com.atlassian.jira.workflow.validator.PermissionValidator + Create Issue + + + + + + + com.atlassian.jira.workflow.function.issue.IssueCreateFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 1 + + + + + + + + + + + closeissue.desc + closeissue.close + closeissue.title + 60 + + + + com.atlassian.jira.workflow.condition.PermissionCondition + Resolve Issue + + + com.atlassian.jira.workflow.condition.PermissionCondition + Close Issue + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + 
com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 5 + + + + + + + + + startprogress.title + 20 + + + + com.atlassian.jira.workflow.condition.PermissionCondition + assignable + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueFieldFunction + resolution + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.issue.AssignToCurrentUserFunction + com.atlassian.jira.plugin.system.workflowassigntocurrentuser-function + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 11 + + + + + + + + + 1 + + + + + + 3 + + + + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 13 + + + + + + + + + false + 6 + + + 10035 + + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 13 + + + + + + + + + 10040 + + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + 
com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 13 + + + + + + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 13 + + + + + + + + + 10039 + + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 13 + + + + + + + + + 10045 + + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 13 + + + + + + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + + + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 13 + + + + + + + + + 10046 + + + + + + + + + com.atlassian.jira.workflow.function.issue.UpdateIssueStatusFunction + + + com.atlassian.jira.workflow.function.misc.CreateCommentFunction + 
+ + com.atlassian.jira.workflow.function.issue.GenerateChangeHistoryFunction + + + com.atlassian.jira.workflow.function.issue.IssueReindexFunction + + + com.atlassian.jira.workflow.function.event.FireIssueEventFunction + 13 + + + + + + + + + diff --git a/09-ci-03-cicd/README.md b/09-ci-03-cicd/README.md index 143ce30db..e8a4b0561 100644 --- a/09-ci-03-cicd/README.md +++ b/09-ci-03-cicd/README.md @@ -1,65 +1,404 @@ -# Домашнее задание к занятию "9.Процессы CI/CD" +# Домашнее задание по лекции "9.3 Процессы CI\CD" ## Подготовка к выполнению -1. Создаём 2 VM в yandex cloud со следующими параметрами: 2CPU 4RAM Centos7(остальное по минимальным требованиям) -2. Прописываем в [inventory](./infrastructure/inventory/cicd/hosts.yml) [playbook'a](./infrastructure/site.yml) созданные хосты -3. Добавляем в [files](./infrastructure/files/) файл со своим публичным ключом (id_rsa.pub). Если ключ называется иначе - найдите таску в плейбуке, которая использует id_rsa.pub имя и исправьте на своё -4. Запускаем playbook, ожидаем успешного завершения -5. Проверяем готовность Sonarqube через [браузер](http://localhost:9000) -6. Заходим под admin\admin, меняем пароль на свой -7. Проверяем готовность Nexus через [бразуер](http://localhost:8081) -8. Подключаемся под admin\admin123, меняем пароль, сохраняем анонимный доступ +> 1. Создаём 2 VM в yandex cloud со следующими параметрами: 2CPU 4RAM Centos7(остальное по минимальным требованиям) +> 2. Прописываем в [inventory](./infrastructure/inventory/cicd/hosts.yml) [playbook'a](./infrastructure/site.yml) созданные хосты +> 3. Добавляем в [files](./infrastructure/files/) файл со своим публичным ключом (id_rsa.pub). Если ключ называется иначе - найдите таску в плейбуке, которая использует id_rsa.pub имя и исправьте на своё +> 4. Запускаем playbook, ожидаем успешного завершения +> 5. Проверяем готовность Sonarqube через [браузер](http://localhost:9000) +> 6. Заходим под admin\admin, меняем пароль на свой +> 7. 
Проверяем готовность Nexus через [браузер](http://localhost:8081)
+> 8. Подключаемся под admin\admin123, меняем пароль, сохраняем анонимный доступ
+
+Облачные машины созданы при помощи **terraform** со следующим [конфигурационным файлом](infrastructure/main.tf).
+
+Усечённый вывод **terraform**:
+
+```console
+...
+Changes to Outputs:
+  + nexus_ip = (known after apply)
+  + sonar_ip = (known after apply)
+yandex_vpc_network.my-net: Creating...
+yandex_compute_image.centos-7: Creating...
+yandex_vpc_network.my-net: Creation complete after 1s [id=enpqhvvlh3fcaeiok3ve]
+yandex_vpc_subnet.my-subnet: Creating...
+yandex_vpc_subnet.my-subnet: Creation complete after 1s [id=e9bthsk1hvrcni3kn3k9]
+yandex_compute_image.centos-7: Creation complete after 7s [id=fd80n8bdihrnuj2clid4]
+yandex_compute_instance.vm["nexus"]: Creating...
+yandex_compute_instance.vm["sonar"]: Creating...
+yandex_compute_instance.vm["sonar"]: Still creating... [10s elapsed]
+yandex_compute_instance.vm["nexus"]: Still creating... [10s elapsed]
+yandex_compute_instance.vm["nexus"]: Still creating... [20s elapsed]
+yandex_compute_instance.vm["sonar"]: Still creating... [20s elapsed]
+yandex_compute_instance.vm["sonar"]: Still creating... [30s elapsed]
+yandex_compute_instance.vm["nexus"]: Still creating... [30s elapsed]
+yandex_compute_instance.vm["nexus"]: Still creating... [40s elapsed]
+yandex_compute_instance.vm["sonar"]: Still creating... [40s elapsed]
+yandex_compute_instance.vm["nexus"]: Creation complete after 42s [id=fhm2681f29hakbal2fbi]
+yandex_compute_instance.vm["sonar"]: Creation complete after 42s [id=fhmu72j5klsu30sqj9mp]
+
+Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
+ +Outputs: + +nexus_ip = "51.250.17.146" +sonar_ip = "51.250.3.1" +┌──(kali㉿kali)-[~/09-ci-03-cicd/infrastructure] +└─$ +``` + +Вместо копирования `id_rsa.pub` внесены изменения в соответствующую **task**: + +```yaml + - name: "Set up ssh key to access for managed node" + authorized_key: + user: "{{ sonarqube_db_user }}" + state: present + key: "{{ lookup('file', '~/.ssh/id_ed25519.pub') }}" +``` + +Проверка функционирования окружения (часть вывода **Ansible** пропущена): + +```console +... +PLAY RECAP ************************************************************************************************************* +nexus-01 : ok=17 changed=15 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 +sonar-01 : ok=35 changed=27 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + +┌──(kali㉿kali)-[~/09-ci-03-cicd/infrastructure] +└─$ ping 51.250.3.1 -p 9000 +PATTERN: 0x9000 +PING 51.250.3.1 (51.250.3.1) 56(84) bytes of data. +64 bytes from 51.250.3.1: icmp_seq=1 ttl=50 time=17.3 ms +64 bytes from 51.250.3.1: icmp_seq=2 ttl=50 time=16.3 ms +64 bytes from 51.250.3.1: icmp_seq=3 ttl=50 time=16.4 ms +64 bytes from 51.250.3.1: icmp_seq=4 ttl=50 time=16.2 ms +64 bytes from 51.250.3.1: icmp_seq=5 ttl=50 time=16.1 ms +^C +--- 51.250.3.1 ping statistics --- +5 packets transmitted, 5 received, 0% packet loss, time 4043ms +rtt min/avg/max/mdev = 16.085/16.462/17.306/0.435 ms +┌──(kali㉿kali)-[~/09-ci-03-cicd/infrastructure] +└─$ ping 51.250.17.146 -p 8081 +PATTERN: 0x8081 +PING 51.250.17.146 (51.250.17.146) 56(84) bytes of data. 
+64 bytes from 51.250.17.146: icmp_seq=1 ttl=51 time=14.9 ms
+64 bytes from 51.250.17.146: icmp_seq=2 ttl=51 time=14.8 ms
+64 bytes from 51.250.17.146: icmp_seq=3 ttl=51 time=14.7 ms
+64 bytes from 51.250.17.146: icmp_seq=4 ttl=51 time=14.3 ms
+64 bytes from 51.250.17.146: icmp_seq=5 ttl=51 time=14.7 ms
+^C
+--- 51.250.17.146 ping statistics ---
+5 packets transmitted, 5 received, 0% packet loss, time 4029ms
+rtt min/avg/max/mdev = 14.334/14.684/14.915/0.191 ms
+┌──(kali㉿kali)-[~/09-ci-03-cicd/infrastructure]
+└─$
+```
+
+Примечание: ключ `-p` у `ping` задаёт шаблон (pattern) содержимого ICMP-пакета, а не TCP-порт, поэтому такая проверка подтверждает лишь сетевую доступность хостов; готовность самих сервисов на портах 9000 и 8081 проверяется через браузер.

## Знакомство с SonarQube

### Основная часть

-1. Создаём новый проект, название произвольное
-2. Скачиваем пакет sonar-scanner, который нам предлагает скачать сам sonarqube
-3. Делаем так, чтобы binary был доступен через вызов в shell (или меняем переменную PATH или любой другой удобный вам способ)
-4. Проверяем `sonar-scanner --version`
-5. Запускаем анализатор против кода из директории [example](./example) с дополнительным ключом `-Dsonar.coverage.exclusions=fail.py`
-6. Смотрим результат в интерфейсе
-7. Исправляем ошибки, которые он выявил(включая warnings)
-8. Запускаем анализатор повторно - проверяем, что QG пройдены успешно
-9. Делаем скриншот успешного прохождения анализа, прикладываем к решению ДЗ
+> 1. Создаём новый проект, название произвольное
+> 2. Скачиваем пакет sonar-scanner, который нам предлагает скачать сам sonarqube
+> 3. Делаем так, чтобы binary был доступен через вызов в shell (или меняем переменную PATH или любой другой удобный вам способ)
+> 4. Проверяем `sonar-scanner --version`
+> 5. Запускаем анализатор против кода из директории [example](./example) с дополнительным ключом `-Dsonar.coverage.exclusions=fail.py`
+> 6. Смотрим результат в интерфейсе
+> 7. Исправляем ошибки, которые он выявил(включая warnings)
+> 8. Запускаем анализатор повторно - проверяем, что QG пройдены успешно
+> 9.
Делаем скриншот успешного прохождения анализа, прикладываем к решению ДЗ + +Проверка функционирования **sonar-scanner**: + +```console +┌──(kali㉿kali)-[~/09-ci-03-cicd] +└─$ sonar-scanner --version +INFO: Scanner configuration file: /home/sa/.local/conf/sonar-scanner.properties +INFO: Project root configuration file: NONE +INFO: SonarScanner 4.7.0.2747 +INFO: Java 11.0.14.1 Eclipse Adoptium (64-bit) +INFO: Linux 5.18.0-3-amd64 amd64 +┌──(kali㉿kali)-[~/09-ci-03-cicd] +└─$ +``` + +Готовая команда запуска анализатора кода: + +```console +sonar-scanner -Dsonar.projectKey=netology -Dsonar.sources=. -Dsonar.host.url=http://51.250.3.1:9000 -Dsonar.login=543945fc60861f017e8d81c8d6557c971dd4fa9b -Dsonar.coverage.exclusions=fail.py +``` + +Пример запуска анализатора кода: + +```console +┌──(kali㉿kali)-[~/09-ci-03-cicd/example] +└─$ sonar-scanner -Dsonar.projectKey=netology -Dsonar.sources=. -Dsonar.host.url=http://51.250.3.1:9000 -Dsonar.login=543945fc60861f017e8d81c8d6557c971dd4fa9b -Dsonar.coverage.exclusions=fail.py +INFO: Scanner configuration file: /home/sa/.local/conf/sonar-scanner.properties +INFO: Project root configuration file: NONE +INFO: SonarScanner 4.7.0.2747 +INFO: Java 11.0.14.1 Eclipse Adoptium (64-bit) +INFO: Linux 5.18.0-3-amd64 amd64 +INFO: User cache: /home/sa/.sonar/cache +INFO: Scanner configuration file: /home/sa/.local/conf/sonar-scanner.properties +INFO: Project root configuration file: NONE +INFO: Analyzing on SonarQube server 9.1.0 +INFO: Default locale: "ru_RU", source code encoding: "UTF-8" (analysis is platform dependent) +INFO: Load global settings +INFO: Load global settings (done) | time=119ms +INFO: Server id: 9CFC3560-AYK7Tf0Hvo_99qKV86b9 +INFO: User cache: /home/sa/.sonar/cache +INFO: Load/download plugins +INFO: Load plugins index +INFO: Load plugins index (done) | time=65ms +INFO: Load/download plugins (done) | time=139ms +INFO: Process project properties +INFO: Process project properties (done) | time=5ms +INFO: Execute project 
builders +INFO: Execute project builders (done) | time=1ms +INFO: Project key: netology +INFO: Base dir: /home/sa/ci-cd/example +INFO: Working dir: /home/sa/ci-cd/example/.scannerwork +INFO: Load project settings for component key: 'netology' +INFO: Load project settings for component key: 'netology' (done) | time=91ms +INFO: Load quality profiles +INFO: Load quality profiles (done) | time=132ms +INFO: Load active rules +INFO: Load active rules (done) | time=3592ms +WARN: SCM provider autodetection failed. Please use "sonar.scm.provider" to define SCM of your project, or disable the SCM Sensor in the project settings. +INFO: Indexing files... +INFO: Project configuration: +INFO: Excluded sources for coverage: fail.py +INFO: 1 file indexed +INFO: Quality profile for py: Sonar way +INFO: ------------- Run sensors on module netology +INFO: Load metrics repository +INFO: Load metrics repository (done) | time=52ms +INFO: Sensor Python Sensor [python] +WARN: Your code is analyzed as compatible with python 2 and 3 by default. This will prevent the detection of issues specific to python 2 or python 3. You can get a more precise analysis by setting a python version in your configuration via the parameter "sonar.python.version" +INFO: Starting global symbols computation +INFO: 1 source file to be analyzed +INFO: Load project repositories +INFO: Load project repositories (done) | time=35ms +INFO: 1/1 source file has been analyzed +INFO: Starting rules execution +INFO: 1 source file to be analyzed +INFO: 1/1 source file has been analyzed +INFO: Sensor Python Sensor [python] (done) | time=359ms +INFO: Sensor Cobertura Sensor for Python coverage [python] +INFO: Sensor Cobertura Sensor for Python coverage [python] (done) | time=8ms +INFO: Sensor PythonXUnitSensor [python] +INFO: Sensor PythonXUnitSensor [python] (done) | time=0ms +INFO: Sensor CSS Rules [cssfamily] +INFO: No CSS, PHP, HTML or VueJS files are found in the project. CSS analysis is skipped. 
+INFO: Sensor CSS Rules [cssfamily] (done) | time=0ms +INFO: Sensor JaCoCo XML Report Importer [jacoco] +INFO: 'sonar.coverage.jacoco.xmlReportPaths' is not defined. Using default locations: target/site/jacoco/jacoco.xml,target/site/jacoco-it/jacoco.xml,build/reports/jacoco/test/jacocoTestReport.xml +INFO: No report imported, no coverage information will be imported by JaCoCo XML Report Importer +INFO: Sensor JaCoCo XML Report Importer [jacoco] (done) | time=1ms +INFO: Sensor C# Project Type Information [csharp] +INFO: Sensor C# Project Type Information [csharp] (done) | time=1ms +INFO: Sensor C# Analysis Log [csharp] +INFO: Sensor C# Analysis Log [csharp] (done) | time=9ms +INFO: Sensor C# Properties [csharp] +INFO: Sensor C# Properties [csharp] (done) | time=0ms +INFO: Sensor JavaXmlSensor [java] +INFO: Sensor JavaXmlSensor [java] (done) | time=1ms +INFO: Sensor HTML [web] +INFO: Sensor HTML [web] (done) | time=1ms +INFO: Sensor VB.NET Project Type Information [vbnet] +INFO: Sensor VB.NET Project Type Information [vbnet] (done) | time=0ms +INFO: Sensor VB.NET Analysis Log [vbnet] +INFO: Sensor VB.NET Analysis Log [vbnet] (done) | time=9ms +INFO: Sensor VB.NET Properties [vbnet] +INFO: Sensor VB.NET Properties [vbnet] (done) | time=0ms +INFO: ------------- Run sensors on project +INFO: Sensor Zero Coverage Sensor +INFO: Sensor Zero Coverage Sensor (done) | time=1ms +INFO: SCM Publisher No SCM system was detected. You can use the 'sonar.scm.provider' property to explicitly specify it. 
+INFO: CPD Executor Calculating CPD for 1 file +INFO: CPD Executor CPD calculation finished (done) | time=8ms +INFO: Analysis report generated in 50ms, dir size=102,7 kB +INFO: Analysis report compressed in 15ms, zip size=13,7 kB +INFO: Analysis report uploaded in 41ms +INFO: ANALYSIS SUCCESSFUL, you can browse http://51.250.3.1:9000/dashboard?id=netology +INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report +INFO: More about the report processing at http://51.250.3.1:9000/api/ce/task?id=AYK7doDcvo_99qKV8_hF +INFO: Analysis total time: 5.586 s +INFO: ------------------------------------------------------------------------ +INFO: EXECUTION SUCCESS +INFO: ------------------------------------------------------------------------ +INFO: Total time: 6.544s +INFO: Final Memory: 16M/57M +INFO: ------------------------------------------------------------------------ +┌──(kali㉿kali)-[~/09-ci-03-cicd/example] +└─$ +``` + +![sonarqube](images/sonarqube.png) ## Знакомство с Nexus ### Основная часть -1. В репозиторий `maven-public` загружаем артефакт с GAV параметрами: - 1. groupId: netology - 2. artifactId: java - 3. version: 8_282 - 4. classifier: distrib - 5. type: tar.gz -2. В него же загружаем такой же артефакт, но с version: 8_102 -3. Проверяем, что все файлы загрузились успешно -4. В ответе присылаем файл `maven-metadata.xml` для этого артефекта +> 1. В репозиторий `maven-releases` загружаем артефакт с GAV параметрами: +> 1. groupId: netology +> 2. artifactId: java +> 3. version: 8_282 +> 4. classifier: distrib +> 5. type: tar.gz +> 2. В него же загружаем такой же артефакт, но с version: 8_102 +> 3. Проверяем, что все файлы загрузились успешно +> 4. 
В ответе присылаем файл `maven-metadata.xml` для этого артефакта
+
+![nexus](images/nexus.png)
+
+Метаданные артефакта: [maven-metadata.xml](mvn/maven-metadata.xml)
+
+```xml
+<metadata>
+  <groupId>netology</groupId>
+  <artifactId>java</artifactId>
+  <versioning>
+    <latest>8_282</latest>
+    <release>8_282</release>
+    <versions>
+      <version>8_102</version>
+      <version>8_282</version>
+    </versions>
+    <lastUpdated>20230525140514</lastUpdated>
+  </versioning>
+</metadata>
+```

### Знакомство с Maven

### Подготовка к выполнению

-1. Скачиваем дистрибутив с [maven](https://maven.apache.org/download.cgi)
-2. Разархивируем, делаем так, чтобы binary был доступен через вызов в shell (или меняем переменную PATH или любой другой удобный вам способ)
-3. Удаляем из `apache-maven-/conf/settings.xml` упоминание о правиле, отвергающем http соединение( раздел mirrors->id: my-repository-http-unblocker)
-4. Проверяем `mvn --version`
-5. Забираем директорию [mvn](./mvn) с pom
+> 1. Скачиваем дистрибутив с [maven](https://maven.apache.org/download.cgi)
+> 2. Разархивируем, делаем так, чтобы binary был доступен через вызов в shell (или меняем переменную PATH или любой другой удобный вам способ)
+> 3. Удаляем из `apache-maven-/conf/settings.xml` упоминание о правиле, отвергающем http соединение( раздел mirrors->id: my-repository-http-blocker)
+> 4. Проверяем `mvn --version`
+> 5. Забираем директорию [mvn](./mvn) с pom
+
+Проверка функционирования **Maven**:
+
+```console
+┌──(kali㉿kali)-[~/09-ci-03-cicd]
+└─$ mvn --version
+Apache Maven 3.8.6 (84538c9988a25aec085021c365c560670ad80f63)
+Maven home: /home/sa/.local
+Java version: 11.0.16, vendor: Debian, runtime: /usr/lib/jvm/java-11-openjdk-amd64
+Default locale: ru_RU, platform encoding: UTF-8
+OS name: "linux", version: "5.18.0-3-amd64", arch: "amd64", family: "unix"
+┌──(kali㉿kali)-[~/09-ci-03-cicd]
+└─$
+```

### Основная часть

-1. Меняем в `pom.xml` блок с зависимостями под наш артефакт из первого пункта задания для Nexus (java с версией 8_282)
-2. Запускаем команду `mvn package` в директории с `pom.xml`, ожидаем успешного окончания
-3. Проверяем директорию `~/.m2/repository/`, находим наш артефакт
-4. 
В ответе присылаем исправленный файл `pom.xml`
+> 1. Меняем в `pom.xml` блок с зависимостями под наш артефакт из первого пункта задания для Nexus (java с версией 8_282)
+> 2. Запускаем команду `mvn package` в директории с `pom.xml`, ожидаем успешного окончания
+> 3. Проверяем директорию `~/.m2/repository/`, находим наш артефакт
+> 4. В ответе присылаем исправленный файл `pom.xml`
+
+Итоговый файл [pom.xml](mvn/pom.xml):
+
+```xml
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  <groupId>com.netology.app</groupId>
+  <artifactId>simple-app</artifactId>
+  <version>1.0-SNAPSHOT</version>
+
+  <repositories>
+    <repository>
+      <id>my-repo</id>
+      <name>maven-releases</name>
+      <url>http://51.250.17.146:8081/repository/maven-releases/</url>
+    </repository>
+  </repositories>
+
+  <dependencies>
+    <dependency>
+      <groupId>netology</groupId>
+      <artifactId>java</artifactId>
+      <version>8_282</version>
+      <classifier>distrib</classifier>
+      <type>tar.gz</type>
+    </dependency>
+  </dependencies>
+</project>
+```
+
+Загрузка зависимостей (артефакта) и сборка дистрибутива:

---

+```console
+┌──(kali㉿kali)-[~/09-ci-03-cicd/mvn]
+└─$ mvn package
+[INFO] Scanning for projects...
+[INFO]
+[INFO] --------------------< com.netology.app:simple-app >---------------------
+[INFO] Building simple-app 1.0-SNAPSHOT
+[INFO] --------------------------------[ jar ]---------------------------------
+[WARNING] The POM for netology:java:tar.gz:distrib:8_282 is missing, no dependency information available
+[INFO]
+[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ simple-app ---
+[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
+[INFO] skip non existing resourceDirectory /home/sa/ci-cd/mvn/src/main/resources
+[INFO]
+[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ simple-app ---
+[INFO] No sources to compile
+[INFO]
+[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ simple-app ---
+[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
+[INFO] skip non existing resourceDirectory /home/sa/ci-cd/mvn/src/test/resources +[INFO] +[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ simple-app --- +[INFO] No sources to compile +[INFO] +[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ simple-app --- +[INFO] No tests to run. +[INFO] +[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ simple-app --- +[WARNING] JAR will be empty - no content was marked for inclusion! +[INFO] Building jar: /home/sa/ci-cd/mvn/target/simple-app-1.0-SNAPSHOT.jar +[INFO] ------------------------------------------------------------------------ +[INFO] BUILD SUCCESS +[INFO] ------------------------------------------------------------------------ +[INFO] Total time: 0.588 s +[INFO] Finished at: 2023-05-25T19:10:17+04:00 +[INFO] ------------------------------------------------------------------------ +┌──(kali㉿kali)-[~/09-ci-03-cicd/mvn] +└─$ +``` -### Как оформить ДЗ? +Во время сборки **Maven** должен загрузить необходимые зависимости (если их нет) в каталог +`~/.m2/repository/<группа>/<артефакт>/<версия>`, где `<группа>` - группа артефакта зависимости = **netology**, `<артефакт>` - идентификатор артефакта зависимости = **java**, `<версия>` - требуемая версия = **8_282**. +Таким образом должны появиться файлы в каталоге `~/.m2/repository/netology/java/8_282/`. +При сборке в каталоге запуска рядом с `pom.xml` должен появиться каталог `target` с дистрибутивом, имя которого будет начинаться с `<артефакт>-<версия>`, где `<артефакт>` - название артефакта = **simple-app**, `<версия>` - версия = **1.0-SNAPSHOT**. -Выполненное домашнее задание пришлите ссылкой на .md-файл в вашем репозитории. 
+Проверка загрузки зависимостей и сборки дистрибутива: ---- +```console +┌──(kali㉿kali)-[~/09-ci-03-cicd/mvn] +└─$ ls target/ +maven-archiver simple-app-1.0-SNAPSHOT.jar +┌──(kali㉿kali)-[~/09-ci-03-cicd/mvn] +└─$ ls ~/.m2/repository/ +backport-util-concurrent classworlds com commons-cli commons-lang commons-logging junit log4j netology org +┌──(kali㉿kali)-[~/09-ci-03-cicd/mvn] +└─$ ls ~/.m2/repository/netology/ +java +┌──(kali㉿kali)-[~/09-ci-03-cicd/mvn] +└─$ ls ~/.m2/repository/netology/java/ +8_282 +┌──(kali㉿kali)-[~/09-ci-03-cicd/mvn] +└─$ ls ~/.m2/repository/netology/java/8_282/ +java-8_282-distrib.tar.gz java-8_282-distrib.tar.gz.sha1 java-8_282.pom.lastUpdated _remote.repositories +┌──(kali㉿kali)-[~/09-ci-03-cicd/mvn] +└─$ +``` diff --git a/09-ci-03-cicd/images/1 b/09-ci-03-cicd/images/1 new file mode 100644 index 000000000..8b1378917 --- /dev/null +++ b/09-ci-03-cicd/images/1 @@ -0,0 +1 @@ + diff --git a/09-ci-03-cicd/images/nexus.png b/09-ci-03-cicd/images/nexus.png new file mode 100644 index 000000000..1240d29a1 Binary files /dev/null and b/09-ci-03-cicd/images/nexus.png differ diff --git a/09-ci-03-cicd/images/sonarqube.png b/09-ci-03-cicd/images/sonarqube.png new file mode 100644 index 000000000..a556cd7ed Binary files /dev/null and b/09-ci-03-cicd/images/sonarqube.png differ diff --git a/09-ci-03-cicd/infrastructure/main.tf b/09-ci-03-cicd/infrastructure/main.tf new file mode 100644 index 000000000..3ab177cd3 --- /dev/null +++ b/09-ci-03-cicd/infrastructure/main.tf @@ -0,0 +1,80 @@ +variable "YC_CLOUD_ID" { default = "" } +variable "YC_FOLDER_ID" { default = "" } +variable "YC_ZONE" { default = "" } + +terraform { + required_providers { + yandex = { + source = "yandex-cloud/yandex" + } + } + required_version = ">= 0.13" +} + +provider "yandex" { + service_account_key_file = "/home/sa/.ssh/yc-key.json" + cloud_id = var.YC_CLOUD_ID + folder_id = var.YC_FOLDER_ID + zone = var.YC_ZONE +} + +resource "yandex_compute_image" "centos-7" { + name = "centos-7" + 
source_family = "centos-7" +} + +resource "yandex_compute_instance" "vm" { + for_each = local.vm_nodes + + name = "${each.key}" + description = "Node for ${each.key}" + + platform_id = "standard-v1" + + resources { + cores = 2 + memory = 4 + core_fraction = 5 + } + + boot_disk { + initialize_params { + image_id = yandex_compute_image.centos-7.id + type = "network-hdd" + size = 10 + } + } + + network_interface { + subnet_id = "${yandex_vpc_subnet.my-subnet.id}" + nat = true + } + + metadata = { + ssh-keys = "centos:${file("~/.ssh/id_ed25519.pub")}" + } +} + +resource "yandex_vpc_network" "my-net" { + name = "vm-network" +} + +resource "yandex_vpc_subnet" "my-subnet" { + name = "cluster-subnet" + v4_cidr_blocks = ["10.2.0.0/16"] + zone = var.YC_ZONE + network_id = yandex_vpc_network.my-net.id + depends_on = [yandex_vpc_network.my-net] +} + +output "sonar_ip" { + value = yandex_compute_instance.vm["sonar"].network_interface.0.nat_ip_address +} + +output "nexus_ip" { + value = yandex_compute_instance.vm["nexus"].network_interface.0.nat_ip_address +} + +locals { + vm_nodes = toset(["sonar", "nexus"]) +} diff --git a/09-ci-03-cicd/infrastructure/site.yml b/09-ci-03-cicd/infrastructure/site.yml index 49d31b3bb..16ec576ee 100644 --- a/09-ci-03-cicd/infrastructure/site.yml +++ b/09-ci-03-cicd/infrastructure/site.yml @@ -117,8 +117,9 @@ authorized_key: user: "{{ sonarqube_db_user }}" state: present - key: "{{ lookup('file', 'id_rsa.pub') }}" - + #key: "{{ lookup('file', 'id_rsa.pub') }}" + key: "{{ lookup('file', '~/.ssh/id_ed25519.pub') }}" + - name: "Allow group to have passwordless sudo" lineinfile: dest: /etc/sudoers diff --git a/09-ci-04-jenkins/README.md b/09-ci-04-jenkins/README.md index 3041bbf55..2fddfb831 100644 --- a/09-ci-04-jenkins/README.md +++ b/09-ci-04-jenkins/README.md @@ -1,32 +1,745 @@ -# Домашнее задание к занятию "10.Jenkins" +# Домашнее задание по лекции "9.4 Jenkins" ## Подготовка к выполнению -1. Создать 2 VM: для jenkins-master и jenkins-agent. -2. 
Установить jenkins при помощи playbook'a.
-3. Запустить и проверить работоспособность.
-4. Сделать первоначальную настройку.
+> 1. Создать 2 VM: для jenkins-master и jenkins-agent.
+> 2. Установить jenkins при помощи playbook'a.
+> 3. Запустить и проверить работоспособность.
+> 4. Сделать первоначальную настройку.
+
+Работа выполнялась в **Яндекс.Облаке**, поэтому использовался [Интерфейс командной строки Yandex Cloud](https://cloud.yandex.ru/docs/cli/quickstart).
+
+Разворачивание ВМ выполнялось провайдером **Яндекс.Облака** для **terraform**, установленным с его [зеркала](https://cloud.yandex.ru/docs/tutorials/infrastructure-management/terraform-quickstart#install-terraform).
+Готовые файлы инфраструктуры: основной [main.tf](infrastructure/main.tf) и модуль [vm-instance](infrastructure/vm-instance/main.tf) (содержит прямую ссылку на открытый ключ).
+
+Разворачивание **Jenkins** и его агента выполняется через **ansible**: [основной playbook](infrastructure/site.yml), [структура нод](infrastructure/cicd/hosts.yml) и [переменные](infrastructure/cicd/group_vars/jenkins.yml).
+
+Всё необходимое для развёртывания собрано в **bash**-скрипте: [go.sh](go.sh)
+
+
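Скрипт `go.sh` построен на простом приёме диспетчеризации: первый позиционный аргумент интерпретируется как имя bash-функции и вызывается. Минимальный набросок этого приёма (функции здесь - условные заглушки, которые только печатают команду вместо реального вызова terraform):

```shell
#!/usr/bin/env bash
# Набросок диспетчера команд в стиле go.sh:
# "$1" выбирает bash-функцию по имени.
up()    { echo "terraform apply"; }    # заглушка вместо реального вызова
down()  { echo "terraform destroy"; }  # заглушка
usage() { echo "Possible commands: up, down"; }

cmd="${1:-}"
# Проверяем, что такая функция действительно объявлена, прежде чем вызывать её, -
# это защищает от попытки выполнить произвольную команду из аргумента.
if declare -f "$cmd" > /dev/null; then
  "$cmd"
else
  usage
fi
```

В самом `go.sh` аргумент вызывается напрямую (`$1`); проверка через `declare -f` - необязательное, но более безопасное уточнение.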
+Разворачивание инфраструктуры + +```console +┌──(kali㉿kali)-[~/09-ci-04jenkins] +└─$ ./go.sh up + +Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the +following symbols: + + create + +Terraform will perform the following actions: + + # yandex_compute_image.os-centos will be created + + resource "yandex_compute_image" "os-centos" { + + created_at = (known after apply) + + folder_id = (known after apply) + + id = (known after apply) + + min_disk_size = (known after apply) + + name = "os-centos-stream" + + os_type = (known after apply) + + pooled = (known after apply) + + product_ids = (known after apply) + + size = (known after apply) + + source_disk = (known after apply) + + source_family = "centos-stream-8" + + source_image = (known after apply) + + source_snapshot = (known after apply) + + source_url = (known after apply) + + status = (known after apply) + } + + # yandex_vpc_network.my-net will be created + + resource "yandex_vpc_network" "my-net" { + + created_at = (known after apply) + + default_security_group_id = (known after apply) + + folder_id = (known after apply) + + id = (known after apply) + + labels = (known after apply) + + name = "cluster-network" + + subnet_ids = (known after apply) + } + + # yandex_vpc_subnet.my-subnet will be created + + resource "yandex_vpc_subnet" "my-subnet" { + + created_at = (known after apply) + + folder_id = (known after apply) + + id = (known after apply) + + labels = (known after apply) + + name = "cluster-subnet" + + network_id = (known after apply) + + v4_cidr_blocks = [ + + "10.2.0.0/16", + ] + + v6_cidr_blocks = (known after apply) + + zone = "ru-central1-a" + } + + # module.jen-master.yandex_compute_instance.vm-instance will be created + + resource "yandex_compute_instance" "vm-instance" { + + created_at = (known after apply) + + description = "Jenkins Master" + + folder_id = (known after apply) + + fqdn = (known after apply) + + hostname = (known 
after apply) + + id = (known after apply) + + metadata = { + + "ssh-keys" = <<-EOT + centos:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMdZoMJuvYB/g/JmA+yMFCS6V0XANM7qZOqfT75WmJjA kali@kali + EOT + } + + name = "jenkins-master-01" + + network_acceleration_type = "standard" + + platform_id = "standard-v1" + + service_account_id = (known after apply) + + status = (known after apply) + + zone = (known after apply) + + + boot_disk { + + auto_delete = true + + device_name = "centos" + + disk_id = (known after apply) + + mode = (known after apply) + + + initialize_params { + + block_size = (known after apply) + + description = (known after apply) + + image_id = (known after apply) + + name = (known after apply) + + size = 10 + + snapshot_id = (known after apply) + + type = "network-hdd" + } + } + + + network_interface { + + index = (known after apply) + + ip_address = (known after apply) + + ipv4 = true + + ipv6 = (known after apply) + + ipv6_address = (known after apply) + + mac_address = (known after apply) + + nat = true + + nat_ip_address = (known after apply) + + nat_ip_version = (known after apply) + + security_group_ids = (known after apply) + + subnet_id = (known after apply) + } + + + placement_policy { + + host_affinity_rules = (known after apply) + + placement_group_id = (known after apply) + } + + + resources { + + core_fraction = 5 + + cores = 2 + + memory = 2 + } + + + scheduling_policy { + + preemptible = (known after apply) + } + } + + # module.jen-slave[0].yandex_compute_instance.vm-instance will be created + + resource "yandex_compute_instance" "vm-instance" { + + created_at = (known after apply) + + description = "Jenkins Slave Node 1" + + folder_id = (known after apply) + + fqdn = (known after apply) + + hostname = (known after apply) + + id = (known after apply) + + metadata = { + + "ssh-keys" = <<-EOT + centos:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMdZoMJuvYB/g/JmA+yMFCS6V0XANM7qZOqfT75WmJjA kali@kali + EOT + } + + name = "jenkins-agent-01" + + 
network_acceleration_type = "standard" + + platform_id = "standard-v1" + + service_account_id = (known after apply) + + status = (known after apply) + + zone = (known after apply) + + + boot_disk { + + auto_delete = true + + device_name = "centos" + + disk_id = (known after apply) + + mode = (known after apply) + + + initialize_params { + + block_size = (known after apply) + + description = (known after apply) + + image_id = (known after apply) + + name = (known after apply) + + size = 10 + + snapshot_id = (known after apply) + + type = "network-hdd" + } + } + + + network_interface { + + index = (known after apply) + + ip_address = (known after apply) + + ipv4 = true + + ipv6 = (known after apply) + + ipv6_address = (known after apply) + + mac_address = (known after apply) + + nat = true + + nat_ip_address = (known after apply) + + nat_ip_version = (known after apply) + + security_group_ids = (known after apply) + + subnet_id = (known after apply) + } + + + placement_policy { + + host_affinity_rules = (known after apply) + + placement_group_id = (known after apply) + } + + + resources { + + core_fraction = 20 + + cores = 2 + + memory = 4 + } + + + scheduling_policy { + + preemptible = (known after apply) + } + } + +Plan: 5 to add, 0 to change, 0 to destroy. + +Changes to Outputs: + + master_ip = (known after apply) + + slave_ip = [ + + (known after apply), + ] +yandex_vpc_network.my-net: Creating... +yandex_compute_image.os-centos: Creating... +yandex_vpc_network.my-net: Creation complete after 2s [id=enpbhup5ssqa6ndcfpqs] +yandex_vpc_subnet.my-subnet: Creating... +yandex_vpc_subnet.my-subnet: Creation complete after 1s [id=e9buecm6olupomr0fgv1] +yandex_compute_image.os-centos: Still creating... [10s elapsed] +yandex_compute_image.os-centos: Creation complete after 11s [id=fd8k3bul4jhiafo87q5t] +module.jen-slave[0].yandex_compute_instance.vm-instance: Creating... +module.jen-master.yandex_compute_instance.vm-instance: Creating... 
+module.jen-slave[0].yandex_compute_instance.vm-instance: Still creating... [10s elapsed] +module.jen-master.yandex_compute_instance.vm-instance: Still creating... [10s elapsed] +module.jen-slave[0].yandex_compute_instance.vm-instance: Still creating... [20s elapsed] +module.jen-master.yandex_compute_instance.vm-instance: Still creating... [20s elapsed] +module.jen-slave[0].yandex_compute_instance.vm-instance: Still creating... [30s elapsed] +module.jen-master.yandex_compute_instance.vm-instance: Still creating... [30s elapsed] +module.jen-master.yandex_compute_instance.vm-instance: Still creating... [40s elapsed] +module.jen-slave[0].yandex_compute_instance.vm-instance: Still creating... [40s elapsed] +module.jen-master.yandex_compute_instance.vm-instance: Still creating... [50s elapsed] +module.jen-slave[0].yandex_compute_instance.vm-instance: Still creating... [50s elapsed] +module.jen-master.yandex_compute_instance.vm-instance: Still creating... [1m0s elapsed] +module.jen-slave[0].yandex_compute_instance.vm-instance: Still creating... [1m0s elapsed] +module.jen-slave[0].yandex_compute_instance.vm-instance: Creation complete after 1m1s [id=fhml6hg70ruicnjjpegm] +module.jen-master.yandex_compute_instance.vm-instance: Creation complete after 1m1s [id=fhm90et921n53rnjgpks] + +Apply complete! Resources: 5 added, 0 changed, 0 destroyed. 
+ +Outputs: + +master_ip = "51.250.17.146" +slave_ip = [ + "51.250.3.1", +] +┌──(kali㉿kali)-[~/09-ci-04jenkins] +└─$ ./go.sh deploy + +PLAY [Generate dynamic inventory] ************************************************************************************** + +TASK [Get instances from Yandex.Cloud CLI] ***************************************************************************** +ok: [localhost] + +TASK [Set instances to facts] ****************************************************************************************** +ok: [localhost] + +TASK [Add instances IP to hosts] *************************************************************************************** +ok: [localhost] => (item={'id': 'fhm90et921n53rnjgpks', 'folder_id': 'b1g3ol70h1opu6hr9kie', 'created_at': '2023-10-31T11:22:41Z', 'name': 'jenkins-master-01', 'description': 'Jenkins Master', 'zone_id': 'ru-central1-a', 'platform_id': 'standard-v1', 'resources': {'memory': '2147483648', 'cores': '2', 'core_fraction': '5'}, 'status': 'RUNNING', 'metadata_options': {'gce_http_endpoint': 'ENABLED', 'aws_v1_http_endpoint': 'ENABLED', 'gce_http_token': 'ENABLED', 'aws_v1_http_token': 'ENABLED'}, 'boot_disk': {'mode': 'READ_WRITE', 'device_name': 'centos', 'auto_delete': True, 'disk_id': 'fhm90knlb6e2d1hiqi2a'}, 'network_interfaces': [{'index': '0', 'mac_address': 'd0:0d:90:3b:a9:10', 'subnet_id': 'e9buecm6olupomr0fgv1', 'primary_v4_address': {'address': '10.2.0.26', 'one_to_one_nat': {'address': '51.250.17.146', 'ip_version': 'IPV4'}}}], 'fqdn': 'fhm90et921n53rnjgpks.auto.internal', 'scheduling_policy': {}, 'network_settings': {'type': 'STANDARD'}, 'placement_policy': {}}) +ok: [localhost] => (item={'id': 'fhml6hg70ruicnjjpegm', 'folder_id': 'b1g3ol70h1opu6hr9kie', 'created_at': '2023-10-31T11:22:41Z', 'name': 'jenkins-agent-01', 'description': 'Jenkins Slave Node 1', 'zone_id': 'ru-central1-a', 'platform_id': 'standard-v1', 'resources': {'memory': '4294967296', 'cores': '2', 'core_fraction': '20'}, 'status': 
'RUNNING', 'metadata_options': {'gce_http_endpoint': 'ENABLED', 'aws_v1_http_endpoint': 'ENABLED', 'gce_http_token': 'ENABLED', 'aws_v1_http_token': 'ENABLED'}, 'boot_disk': {'mode': 'READ_WRITE', 'device_name': 'centos', 'auto_delete': True, 'disk_id': 'fhmiubt6uodfipkp90v9'}, 'network_interfaces': [{'index': '0', 'mac_address': 'd0:0d:15:34:60:70', 'subnet_id': 'e9buecm6olupomr0fgv1', 'primary_v4_address': {'address': '10.2.0.10', 'one_to_one_nat': {'address': '51.250.3.1', 'ip_version': 'IPV4'}}}], 'fqdn': 'fhml6hg70ruicnjjpegm.auto.internal', 'scheduling_policy': {}, 'network_settings': {'type': 'STANDARD'}, 'placement_policy': {}}) + +TASK [Check instance count] ******************************************************************************************** +ok: [localhost] => { + "msg": "Total instance count: 2" +} + +PLAY [Approve SSH fingerprint] ***************************************************************************************** + +TASK [Check known_hosts for] ******************************************************************************************* +ok: [jenkins-master-01 -> localhost] +ok: [jenkins-agent-01 -> localhost] + +TASK [Skip question for adding host key] ******************************************************************************* +ok: [jenkins-master-01] +ok: [jenkins-agent-01] + +TASK [Add SSH fingerprint to known host] ******************************************************************************* +ok: [jenkins-agent-01] +ok: [jenkins-master-01] + +PLAY [Preapre all hosts] *********************************************************************************************** + +TASK [Gathering Facts] ************************************************************************************************* +ok: [jenkins-master-01] +ok: [jenkins-agent-01] + +TASK [Create group] **************************************************************************************************** +changed: [jenkins-agent-01] +changed: [jenkins-master-01] + +TASK [Create user] 
***************************************************************************************************** +changed: [jenkins-agent-01] +changed: [jenkins-master-01] + +TASK [Echo Public Key] ************************************************************************************************* +ok: [jenkins-master-01] => { + "ssh_key_info['ssh_public_key']": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChaQp2Upzkg7QDUdk6rtj3fLTGJcegS13JD74id1MGxRzZeZJFLuDowP31kKCkorgff8lqGsNOq5zczHiMHWMAX7TbQaoNFrYv57nAY/FEl8D/FNw8ZURV/b/V8WZW69LzSz74WNzwYMzWHoWEFfL7OLQydR8jCAZcYWGyaiWf+3KlUIIwnjYGUMyIHJ4QRlZeS4WJBrrGQiWnuG51UnD6AYGFHihzJiPoF1jxbFfQz5Wf+LzWzaD8dPlswoUXz45p+0ZLRN4h+JhMAebyao0D40ukzqPehGEsrkyBsUMhI9BqoYgU/I36oGTqKFNjEiDVV3kTd5tfwSVVe92Grm6FvUmVnHbbzBuiojODGj4sIKvYqyOsC87W04GUr54qUzS4ugizhcAoO0iIa9kjmv5pvPOSoFRX17ab2cBZbW/2Ta13TKpRHVyYs+LeksuoAzPb6cgHyeCFLQZEVrZ59sqrARVBlCtrpKFRovkFZe9mKFm/YMyByJfahN+E54Etxt0= ansible-generated on fhm90et921n53rnjgpks.auto.internal" +} +ok: [jenkins-agent-01] => { + "ssh_key_info['ssh_public_key']": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCe6mPN/totoYhT2xHuRqSUb++lykkgS2W8k3SuYbC99G6DD/j9kNSJ4lCT0JEjMJj53XegRvHzGlQTxjEl0xJBFS+HP03n7uDvMaEuA6B7jG3LjzwPhvasbtRvv39bZMIHmOG3kufH2zu7SAnKg3Reafzf+ZwF7eqzWxoGYHlt3wZf42hw9zH6vdX2eWczmMNnOTcNiJqJ4K4iX3xStbpL84OJsC69xJcDQ8m70CPvOKtrMxRTbHuiKU9oDV4oWHB+sO8y+Pf97SsjWGGX1UUwir00jFG4YFSTR97ZxgA2U2fJyzrnYtSwIGaO/LaJCFTb7QrgrRVYY2rX/FJxqa6Cv+Wca7EVvGIqMbcKNE27NA83H54sS/yWqmgYs6ex9aKLc5GEjmRVDs4ebPRUTHlYMm4gAIuBjAi0KqKDiJirp0gQbt/Y2nGJMxeV4i5/DbAi07vR6qY3vVfTk9Qwlk3mxjTixwsOtUufp2CO7IAGR01j6lbqChDeuNtZw2Rw/FU= ansible-generated on fhml6hg70ruicnjjpegm.auto.internal" +} + +TASK [Install JDK] ***************************************************************************************************** +changed: [jenkins-master-01] +changed: [jenkins-agent-01] + +TASK [Ensure GitHub are present in known_hosts file] ******************************************************************* +# github.com:22 SSH-2.0-babeld-ea310e90 +# 
github.com:22 SSH-2.0-babeld-ea310e90 +# github.com:22 SSH-2.0-babeld-ea310e90 +# github.com:22 SSH-2.0-babeld-ea310e90 +# github.com:22 SSH-2.0-babeld-ea310e90 +# github.com:22 SSH-2.0-babeld-ea310e90 +# github.com:22 SSH-2.0-babeld-ea310e90 +# github.com:22 SSH-2.0-babeld-ea310e90 +# github.com:22 SSH-2.0-babeld-ea310e90 +# github.com:22 SSH-2.0-babeld-ea310e90 +[WARNING]: Module remote_tmp /home/jenkins/.ansible/tmp did not exist and was created with a mode of 0700, this may +cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions +manually +changed: [jenkins-master-01] +changed: [jenkins-agent-01] + +PLAY [Get Jenkins master installed] ************************************************************************************ + +TASK [Gathering Facts] ************************************************************************************************* +ok: [jenkins-master-01] + +TASK [Get repo Jenkins] ************************************************************************************************ +changed: [jenkins-master-01] + +TASK [Add Jenkins key] ************************************************************************************************* +changed: [jenkins-master-01] + +TASK [Install epel-release] ******************************************************************************************** +changed: [jenkins-master-01] + +TASK [Install Jenkins and requirements] ******************************************************************************** +changed: [jenkins-master-01] + +TASK [Ensure jenkins agents are present in known_hosts file] *********************************************************** +# 51.250.3.1:22 SSH-2.0-OpenSSH_8.0 +# 51.250.3.1:22 SSH-2.0-OpenSSH_8.0 +# 51.250.3.1:22 SSH-2.0-OpenSSH_8.0 +# 51.250.3.1:22 SSH-2.0-OpenSSH_8.0 +# 51.250.3.1:22 SSH-2.0-OpenSSH_8.0 +changed: [jenkins-master-01] => (item=jenkins-agent-01) + +TASK [Start Jenkins] 
*************************************************************************************************** +changed: [jenkins-master-01] + +TASK [Get initial password] ******************************************************************************************** +ok: [jenkins-master-01] + +TASK [Echo initial password] ******************************************************************************************* +ok: [jenkins-master-01] => { + "msg": "Use it to complete initialization: 1778e36f5a514defa49f414865bec9b5" +} + +TASK [Commands to get private key] ************************************************************************************* +ok: [jenkins-master-01] => { + "msg": "ssh centos@51.250.17.146 sudo cat /home/jenkins/.ssh/id_rsa" +} + +PLAY [Prepare jenkins agent] ******************************************************************************************* + +TASK [Gathering Facts] ************************************************************************************************* +ok: [jenkins-agent-01] + +TASK [Add master publickey into authorized_key] ************************************************************************ +changed: [jenkins-agent-01] + +TASK [Create agent_dir] ************************************************************************************************ +changed: [jenkins-agent-01] + +TASK [Add docker repo] ************************************************************************************************* +changed: [jenkins-agent-01] + +TASK [Install some required] ******************************************************************************************* +changed: [jenkins-agent-01] + +TASK [Update pip] ****************************************************************************************************** +changed: [jenkins-agent-01] + +TASK [Install Ansible 2.13.5] ****************************************************************************************** +changed: [jenkins-agent-01] + +TASK [Install Molecule] 
************************************************************************************************ +changed: [jenkins-agent-01] + +TASK [Install Python 3.6] ********************************************************************************************** +changed: [jenkins-agent-01] + +TASK [Add local to PATH] *********************************************************************************************** +changed: [jenkins-agent-01] + +TASK [Create docker group] ********************************************************************************************* +ok: [jenkins-agent-01] + +TASK [Add jenkinsuser to dockergroup] ********************************************************************************** +changed: [jenkins-agent-01] + +TASK [Restart docker] ************************************************************************************************** +changed: [jenkins-agent-01] + +TASK [Install agent.jar] *********************************************************************************************** +changed: [jenkins-agent-01] + +TASK [Agent configuration] ********************************************************************************************* +ok: [jenkins-agent-01] => { + "msg": "To configure agent node use: ssh 51.250.3.1 java -jar /opt/jenkins_agent/agent.jar" +} + +PLAY RECAP ************************************************************************************************************* +jenkins-agent-01 : ok=24 changed=16 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 +jenkins-master-01 : ok=19 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 +localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + ++----------------------+-------------------+---------------+---------+-----------------+-------------+ +| ID | NAME | ZONE ID | STATUS | EXTERNAL IP | INTERNAL IP | ++----------------------+-------------------+---------------+---------+-----------------+-------------+ +| fhm90et921n53rnjgpks | jenkins-master-01 | 
ru-central1-a | RUNNING | 51.250.17.146 | 10.2.0.26 | +| fhml6hg70ruicnjjpegm | jenkins-agent-01 | ru-central1-a | RUNNING | 51.250.3.1 | 10.2.0.10 | ++----------------------+-------------------+---------------+---------+-----------------+-------------+ + +┌──(kali㉿kali)-[~/09-ci-04jenkins] +└─$ +``` + +
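Шаг `Generate dynamic inventory` из лога выше можно представить примерно таким play. Это набросок по мотивам вывода: имена задач взяты из лога, а аргументы модулей - предположение (реальный playbook может отличаться); путь к внешнему адресу ВМ повторяет структуру JSON из `yc compute instance list`:

```yaml
- name: Generate dynamic inventory
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Get instances from Yandex.Cloud CLI
      ansible.builtin.command: yc compute instance list --format json
      register: yc_out
      changed_when: false

    - name: Set instances to facts
      ansible.builtin.set_fact:
        instances: "{{ yc_out.stdout | from_json }}"

    - name: Add instances IP to hosts
      ansible.builtin.add_host:
        name: "{{ item.name }}"
        ansible_host: "{{ item.network_interfaces[0].primary_v4_address.one_to_one_nat.address }}"
      loop: "{{ instances }}"
```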
+
+Для решения этой задачи в качестве эксперимента решил собрать стек коллекции под названием **JeMoVeClLi**.
+В ней используются роли предыдущих задач (**Clickhouse** + **Vector** + **Lighthouse**), а также добавлены скрипты для **Jenkins** и тестирование посредством **Molecule**.
+
+
+---
 ## Основная часть
-1. Сделать Freestyle Job, который будет запускать `molecule test` из любого вашего репозитория с ролью.
-2. Сделать Declarative Pipeline Job, который будет запускать `molecule test` из любого вашего репозитория с ролью.
-3. Перенести Declarative Pipeline в репозиторий в файл `Jenkinsfile`.
-4. Создать Multibranch Pipeline на запуск `Jenkinsfile` из репозитория.
-5. Создать Scripted Pipeline, наполнить его скриптом из [pipeline](./pipeline).
-6. Внести необходимые изменения, чтобы Pipeline запускал `ansible-playbook` без флагов `--check --diff`, если не установлен параметр при запуске джобы (prod_run = True), по умолчанию параметр имеет значение False и запускает прогон с флагами `--check --diff`.
-7. Проверить работоспособность, исправить ошибки, исправленный Pipeline вложить в репозиторий в файл `ScriptedJenkinsfile`.
-8. Отправить ссылку на репозиторий с ролью и Declarative Pipeline и Scripted Pipeline.
+### 1. Сделать Freestyle Job, который будет запускать `molecule test` из любого вашего репозитория с ролью.
+
+![freestyle](img/freestyle.png)
+
+---
+
+### 2. Сделать Declarative Pipeline Job, который будет запускать `molecule test` из любого вашего репозитория с ролью.
+
+Готовый скрипт для **pipeline**:
+
+```jenkins
+pipeline {
+    agent any
+
+    stages {
+        stage('Get CVL roles') {
+            steps {
+                git branch: 'main', credentialsId: 'sa-github', url: 'git@github.com:NamorNinayzuk/JeMoVeClLi.git'
+            }
+        }
+        stage('Molecute Test') {
+            steps {
+                sh 'molecule test'
+            }
+        }
+    }
+}
+```
+
+![declarative](img/declarative.png)
+
+---
+
+### 3. Перенести Declarative Pipeline в репозиторий в файл `Jenkinsfile`.
+
+~~Так как имя файла `Jenkinsfile` будет использоваться в следующей задаче, декларативный **pipeline** сохранён в файл с именем `DeclarativeJenkinsfile`~~
+
+---
+
+### 4. Создать Multibranch Pipeline на запуск `Jenkinsfile` из репозитория.
+
+> Использовать `Jenkinsfile` из предыдущего задания считаю некорректным, ибо в нём явно прописано извлечение конкретной ветки, что делает его содержимое избыточным.
+> **Multibranch** уже извлекает всё содержимое нужной ветки.
+> И если этот файл в обязательном порядке не менять в каждой ветке, не до конца понятно, что будет собираться: извлечённая ветка от **multibranch** или та, которую извлёк скрипт.
+> Данное поведение я, к сожалению, проверить не догадался.
+> В данном решении файлы `Jenkinsfile` разные из-за наличия в них маркеров (первый шаг) - без них файл можно было бы сделать единым для всех веток.
+
+Для проверки решения в имеющемся репозитории **JeMoVeClLi** создан файл `Jenkinsfile` со следующим содержимым:
+
+```jenkins
+pipeline {
+    agent any
+
+    stages {
+        stage('Main branch') {
+            steps {
+                echo 'MAIN'
+            }
+        }
+        stage('Molecute Test') {
+            steps {
+                sh 'molecule test'
+            }
+        }
+    }
+}
+```
+> **Pipeline** состоит из двух этапов, где первый является своеобразным маркером ветки (выводит текст `MAIN` и имеет соответствующее название), а второй запускает тестирование **Molecule**.
+
+В **playbook** этапа **Molecule converge** добавлен **play** со следующим содержимым:
+
+```yaml
+- name: Project name
+  hosts: localhost
+  gather_facts: false
+  tasks:
+    - name: Project branch
+      ansible.builtin.debug:
+        msg: "It is MAIN branch"
+```
+> Также служит "маркером" исполняемой ветки.
+
+Далее от основной ветки `main` создано ответвление `second`, где перечисленные выше файлы изменены следующим образом:
+
+Файл `Jenkinsfile`:
+
+```jenkins
+pipeline {
+    agent any
+    stages {
+        stage('Second branch') {
+            steps {
+                echo 'SECOND'
+            }
+        }
+        stage('Molecute Test') {
+            steps {
+                sh 'molecule test'
+            }
+        }
+    }
+}
+```
+
+Изменённый
**play** файла `molecule/default/converge.yml`: + +```yaml +- name: Project name + hosts: localhost + gather_facts: false + tasks: + - name: Project branch + ansible.builtin.debug: + msg: "It is SECOND branch" +``` + +При сохранении конфигурации проекта выполняется сканирование репозитория в соответствии с настройками проекта для поиска нужных ссылок (веток, тегов и т.д.) - в данном случае ищутся только ветки. +Лог обработки выглядит подобным образом: + +```console +Started +[Wed May 31 15:15:43 UTC 2023] Starting branch indexing... + > git --version # timeout=10 + > git --version # 'git version 2.31.1' +using GIT_SSH to set credentials +[INFO] Currently running in a labeled security context +[INFO] Currently SELinux is 'enforcing' on the host + > /usr/bin/chcon --type=ssh_home_t /tmp/jenkins-gitclient-ssh9349166857291373915.key +Verifying host key using known hosts file + > git ls-remote --symref -- git@github.com:NamorNinayzuk/JeMoVeClLi.git # timeout=10 +Creating git repository in /var/lib/jenkins/caches/git-f69dbc5e9c49bafa75a1f1623749dd2b + > git init /var/lib/jenkins/caches/git-f69dbc5e9c49bafa75a1f1623749dd2b # timeout=10 +Setting origin to git@github.com:NamorNinayzuk/JeMoVeClLi.git + > git config remote.origin.url git@github.com:NamorNinayzuk/JeMoVeClLi.git # timeout=10 +Fetching & pruning origin... +Listing remote references... 
+ > git config --get remote.origin.url # timeout=10 + > git --version # timeout=10 + > git --version # 'git version 2.31.1' +using GIT_SSH to set credentials +[INFO] Currently running in a labeled security context +[INFO] Currently SELinux is 'enforcing' on the host + > /usr/bin/chcon --type=ssh_home_t /var/lib/jenkins/caches/git-f69dbc5e9c49bafa75a1f1623749dd2b@tmp/jenkins-gitclient-ssh4565873836439757084.key +Verifying host key using known hosts file + > git ls-remote -h -- git@github.com:NamorNinayzuk/JeMoVeClLi.git # timeout=10 +Fetching upstream changes from origin + > git config --get remote.origin.url # timeout=10 +using GIT_SSH to set credentials +[INFO] Currently running in a labeled security context +[INFO] Currently SELinux is 'enforcing' on the host + > /usr/bin/chcon --type=ssh_home_t /var/lib/jenkins/caches/git-f69dbc5e9c49bafa75a1f1623749dd2b@tmp/jenkins-gitclient-ssh16470572433904368790.key +Verifying host key using known hosts file + > git fetch --tags --force --progress --prune -- origin +refs/heads/*:refs/remotes/origin/* # timeout=10 +Checking branches... + Checking branch main + ‘Jenkinsfile’ found + Met criteria +Scheduled build for branch: main + Checking branch second + ‘Jenkinsfile’ found + Met criteria +Scheduled build for branch: second +Processed 2 branches +[Wed May 31 15:15:52 UTC 2023] Finished branch indexing. Indexing took 8.6 sec +Finished: SUCCESS +``` + + + +![multi](img/multi.png) + + +--- + +### 5. Создать Scripted Pipeline, наполнить его скриптом из [pipeline](./pipeline). 
+
+В исходном скрипте:
+```jenkins
+node("linux"){
+    stage("Git checkout"){
+        git credentialsId: '5ac0095d-0185-431b-94da-09a0ad9b0e2c', url: 'git@github.com:aragastmatb/example-playbook.git'
+    }
+    stage("Sample define secret_check"){
+        secret_check=true
+    }
+    stage("Run playbook"){
+        if (secret_check){
+            sh 'ansible-playbook site.yml -i inventory/prod.yml'
+        }
+        else{
+            echo 'need more action'
+        }
+    }
+}
+```
+
+Единственное, что нужно поменять, - это `credentialsId`: сохранённый в секретах **credentials** доступ к **GitHub** (скорее всего, **private**-ключ).
+
+Так как я раньше уже сохранял аналогичный **credentials**, то можно использовать его:
+```jenkins
+node("linux"){
+    stage("Git checkout"){
+        git credentialsId: 'sa-github', url: 'git@github.com:aragastmatb/example-playbook.git'
+    }
+    stage("Sample define secret_check"){
+        secret_check=true
+    }
+    stage("Run playbook"){
+        if (secret_check){
+            sh 'ansible-playbook site.yml -i inventory/prod.yml'
+        }
+        else{
+            echo 'need more action'
+        }
+    }
+}
+```
+
+Однако в скрипте указано использование агента с меткой `linux`.
+Поэтому для того, чтобы запланированная сборка могла быть отправлена агенту на выполнение, нужно:
+ - либо убрать требование метки - заменить строчку `node("linux"){` на `node{`
+ - либо добавить соответствующую метку ноде агента
+
+Для решения выбран вариант добавления меток `linux ansible molecule`.
+
+---
+
+### 6. Внести необходимые изменения, чтобы Pipeline запускал `ansible-playbook` без флагов `--check --diff`, если не установлен параметр при запуске джобы (prod_run = True), по умолчанию параметр имеет значение False и запускает прогон с флагами `--check --diff`.
+
+Условия попроще:
+ - Добавить параметр сборки `prod_run`
+ - Если параметр не установлен (что эквивалентно значению `false`), запускать `ansible-playbook` с флагами `--check --diff`
+ - Иначе (если параметр имеет значение `true`) запускать `ansible-playbook` без флагов
+
+Готовый скрипт:
+```jenkins
+node("linux"){
+    stage("Git checkout"){
+        git credentialsId: 'sa-github', url: 'git@github.com:aragastmatb/example-playbook.git'
+    }
+    stage("Run playbook"){
+        if (params.prod_run == true) {
+            sh 'ansible-playbook site.yml -i inventory/prod.yml'
+        } else {
+            sh 'ansible-playbook site.yml -i inventory/prod.yml --check --diff'
+        }
+    }
+}
+```
+
+---
+
+### 7. Проверить работоспособность, исправить ошибки, исправленный Pipeline вложить в репозиторий в файл `ScriptedJenkinsfile`.
+
+При исполнении **playbook** появляется ошибка:
+![scripted-error](img/scripted-error.png)
+
+```console
+TASK [java : Ensure installation dir exists] ***********************************
+fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "sudo: a password is required\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
+```
+
+Из сообщения видно, что **ansible** пытается выполнить команду с повышенными привилегиями (**sudo**), но соответствующий пароль не задан.
+Проблему можно решить разными способами - подробнее в статье [Ansible – “sudo: a password is required”](https://www.shellhacks.com/ansible-sudo-a-password-is-required/)
+
+Я выбрал вариант попроще, а именно - не спрашивать пароль для `sudo`-операций пользователя `jenkins`, для чего нужно:
+ - Подключиться к ноде агента: `ssh centos@51.250.3.1`
+ - Выполнить команду `sudo visudo`
+ - Добавить в конец файла строку `jenkins ALL=(ALL) NOPASSWD:ALL`
+
+---
+
+### 8. Отправить ссылку на репозиторий с ролью и Declarative Pipeline и Scripted Pipeline.
+
+Репозиторий решения: [JeMoVeClLi](https://github.com/NamorNinayzuk/JeMoVeClLi) - содержит:
+ - Сценарий проверки ролей стека **Clickhouse+Vector+Lighthouse**
+ - Две ветки для **Multibranch pipeline**: `main` и `second` - используется файл **Jenkinsfile**
+ - Основная ветка **main** содержит файл `DeclarativeJenkinsfile` для **Declarative pipeline**
+ - Основная ветка **main** содержит файл `ScriptedJenkinsfile` для **Scripted pipeline** (включает отладочный шаг `id`)
+ - Основная ветка **main** содержит файл `ScriptedCVLstackJenkinsfile` из второй дополнительной задачи
+
+---
 ## Необязательная часть
-1. Создать скрипт на groovy, который будет собирать все Job, которые завершились хотя бы раз неуспешно. Добавить скрипт в репозиторий с решением с названием `AllJobFailure.groovy`.
-2. Создать Scripted Pipeline таким образом, чтобы он мог сначала запустить через Ya.Cloud CLI необходимое количество инстансов, прописать их в инвентори плейбука и после этого запускать плейбук. Тем самым, мы должны по нажатию кнопки получить готовую к использованию систему.
+1. Создать скрипт на groovy, который будет собирать все Job, завершившиеся хотя бы раз неуспешно. Добавить скрипт в репозиторий с решением под названием `AllJobFailure.groovy`.
+2. Создать Scripted Pipeline так, чтобы он мог сначала запустить через Yandex Cloud CLI необходимое количество инстансов, прописать их в инвентори плейбука и после этого запускать плейбук. Мы должны при нажатии кнопки получить готовую к использованию систему.
 
 ---
 
-### Как оформить ДЗ?
+### Как оформить решение задания
 
-Выполненное домашнее задание пришлите ссылкой на .md-файл в вашем репозитории.
+Выполненное домашнее задание пришлите в виде ссылки на .md-файл в вашем репозитории.
 
 ---
diff --git a/09-ci-04-jenkins/go.sh b/09-ci-04-jenkins/go.sh
new file mode 100644
index 000000000..c18892d6b
--- /dev/null
+++ b/09-ci-04-jenkins/go.sh
@@ -0,0 +1,58 @@
+#!/usr/bin/env bash
+
+workdir=./infrastructure
+
+if !
[ -x "$(command -v terraform)" ]; then
+  echo 'Terraform is not installed. Visit: https://hashicorp-releases.yandexcloud.net/terraform/' >&2
+  exit 1
+fi
+
+if ! [ -x "$(command -v yc)" ]; then
+  echo 'Yandex CLI is not installed. Visit: https://cloud.yandex.ru/docs/cli/quickstart' >&2
+  exit 2
+fi
+
+if [ ! -d "$workdir" ]; then
+  echo "Infrastructure files do not exist"
+  exit 3
+fi
+
+export TF_VAR_YC_TOKEN=$(yc config get token)
+export TF_VAR_YC_CLOUD_ID=$(yc config get cloud-id)
+export TF_VAR_YC_FOLDER_ID=$(yc config get folder-id)
+export TF_VAR_YC_ZONE=$(yc config get compute-default-zone)
+
+init() {
+  terraform init
+}
+
+up() {
+  terraform apply --auto-approve
+}
+
+down() {
+  terraform destroy --auto-approve
+}
+
+clear() {
+  rm -rf .terraform*
+  rm -f terraform.tfstate*
+}
+
+deploy() {
+  ansible-playbook -i cicd site.yml
+  yc compute instance list
+}
+
+cd "$workdir" || exit 4
+
+if [ -n "$1" ]; then
+  "$1"
+else
+  echo "Possible commands:"
+  echo "  init   - Init terraform provider"
+  echo "  up     - Deploy cloud resources"
+  echo "  down   - Destroy all cloud resources"
+  echo "  clear  - Clear temporary files"
+  echo "  deploy - Deploy Jenkins nodes"
+fi
diff --git a/09-ci-04-jenkins/img/declarative.png b/09-ci-04-jenkins/img/declarative.png
new file mode 100644
index 000000000..ffe824192
Binary files /dev/null and b/09-ci-04-jenkins/img/declarative.png differ
diff --git a/09-ci-04-jenkins/img/freestyle.png b/09-ci-04-jenkins/img/freestyle.png
new file mode 100644
index 000000000..63850d02f
Binary files /dev/null and b/09-ci-04-jenkins/img/freestyle.png differ
diff --git a/09-ci-04-jenkins/img/multi.png b/09-ci-04-jenkins/img/multi.png
new file mode 100644
index 000000000..90b7533b7
Binary files /dev/null and b/09-ci-04-jenkins/img/multi.png differ
diff --git a/09-ci-04-jenkins/img/readmi.txt b/09-ci-04-jenkins/img/readmi.txt
new file mode 100644
index 000000000..8b1378917
--- /dev/null
+++ b/09-ci-04-jenkins/img/readmi.txt
@@ -0,0 +1 @@
+
diff --git 
a/09-ci-04-jenkins/img/scripted-error.png b/09-ci-04-jenkins/img/scripted-error.png
new file mode 100644
index 000000000..1d9948f38
Binary files /dev/null and b/09-ci-04-jenkins/img/scripted-error.png differ
diff --git a/09-ci-04-jenkins/infrastructure/cicd/group_vars/jenkins.yml b/09-ci-04-jenkins/infrastructure/cicd/group_vars/jenkins.yml
new file mode 100644
index 000000000..b2c2570d7
--- /dev/null
+++ b/09-ci-04-jenkins/infrastructure/cicd/group_vars/jenkins.yml
@@ -0,0 +1,9 @@
+---
+user_group: "{{ jenkins_user_group }}"
+user_name: "{{ jenkins_user_name }}"
+jenkins_user_name: jenkins
+jenkins_user_group: jenkins
+java_packages:
+  - java-11-openjdk-devel
+  - java-11-openjdk
+jenkins_agent_dir: /opt/jenkins_agent/
diff --git a/09-ci-04-jenkins/infrastructure/cicd/hosts.yml b/09-ci-04-jenkins/infrastructure/cicd/hosts.yml
new file mode 100644
index 000000000..d2b8f52c3
--- /dev/null
+++ b/09-ci-04-jenkins/infrastructure/cicd/hosts.yml
@@ -0,0 +1,17 @@
+---
+all:
+  hosts:
+    jenkins-master-01:
+    jenkins-agent-01:
+  children:
+    jenkins:
+      children:
+        jenkins_masters:
+          hosts:
+            jenkins-master-01:
+        jenkins_agents:
+          hosts:
+            jenkins-agent-01:
+  vars:
+    ansible_connection: paramiko
+    ansible_user: centos
diff --git a/09-ci-04-jenkins/infrastructure/vm-instance/main.tf b/09-ci-04-jenkins/infrastructure/vm-instance/main.tf
new file mode 100644
index 000000000..95dcf8251
--- /dev/null
+++ b/09-ci-04-jenkins/infrastructure/vm-instance/main.tf
@@ -0,0 +1,53 @@
+variable name { default = "" }
+variable description { default = "" }
+variable platform { default = "standard-v1" }
+variable cpu { default = "" }
+variable ram { default = "" }
+variable cpu_load { default = 5 }
+variable main_disk_image { default = "" }
+variable main_disk_size { default = "" }
+variable subnet { default = "" }
+variable user { default = "" }
+
+terraform {
+  required_providers {
+    yandex = {
+      source = "yandex-cloud/yandex"
+    }
+  }
+  required_version = ">= 0.13"
+}
+
+resource "yandex_compute_instance" "vm-instance" { + name = var.name + description = var.description + platform_id = var.platform + + resources { + cores = var.cpu + memory = var.ram + core_fraction = var.cpu_load + } + + boot_disk { + device_name = var.user + initialize_params { + image_id = var.main_disk_image + type = "network-hdd" + size = var.main_disk_size + } + } + + network_interface { + subnet_id = var.subnet + nat = true + } + + metadata = { + ssh-keys = "${var.user}:${file("~/.ssh/id_ed25519.pub")}" + } +} + +output "external_ip" { + value = "${yandex_compute_instance.vm-instance.network_interface.0.nat_ip_address}" +} diff --git a/10-monitoring-01-base/README.md b/10-monitoring-01-base/README.md index 2fbbc1971..c87259a10 100644 --- a/10-monitoring-01-base/README.md +++ b/10-monitoring-01-base/README.md @@ -1,23 +1,93 @@ -# Домашнее задание к занятию "13.Введение в мониторинг" +# Домашнее задание по лекции "10.1 Зачем и что нужно мониторить" -## Обязательные задания +## Обязательная задача 1 -1. Вас пригласили настроить мониторинг на проект. На онбординге вам рассказали, что проект представляет из себя -платформу для вычислений с выдачей текстовых отчетов, которые сохраняются на диск. Взаимодействие с платформой -осуществляется по протоколу http. Также вам отметили, что вычисления загружают ЦПУ. Какой минимальный набор метрик вы -выведите в мониторинг и почему? +> Вас пригласили настроить мониторинг на проект. На онбординге вам рассказали, что проект представляет из себя +> платформу для вычислений с выдачей текстовых отчетов, которые сохраняются на диск. Взаимодействие с платформой +> осуществляется по протоколу http. Также вам отметили, что вычисления загружают ЦПУ. Какой минимальный набор метрик вы +> выведите в мониторинг и почему? -2. Менеджер продукта посмотрев на ваши метрики сказал, что ему непонятно что такое RAM/inodes/CPUla. 
Также он сказал,
-что хочет понимать, насколько мы выполняем свои обязанности перед клиентами и какое качество обслуживания. Что вы
-можете ему предложить?
+Одна из основных задач мониторинга - контроль важных для сервиса/проекта/бизнеса параметров.
+В данном примере для проекта явно указана нагрузка на ЦПУ и хранение результатов вычислений
+в виде текстовых отчётов на диске.
-3. Вашей DevOps команде в этом году не выделили финансирование на построение системы сбора логов. Разработчики в свою
-очередь хотят видеть все ошибки, которые выдают их приложения. Какое решение вы можете предпринять в этой ситуации,
-чтобы разработчики получали ошибки приложения?
+Поэтому как минимум нужно выводить уровень загрузки ЦПУ вычислительных нод, а также доступную ёмкость нод, предназначенных для хранения отчётов.
+Уровень загрузки ЦПУ и доступная ёмкость дисков помогут определить уровень достаточности ресурсов для проекта в данный момент
+(а может быть, у нас оборудование простаивает и можно часть сократить, если это облачные ресурсы, или перевести на другие проекты, если "железо" своё),
+а также позволят спрогнозировать время модернизации.
+Вместе с этим также можно вывести другие параметры систем хранения данных, например
+длину очереди на запись - возможно, текущие накопители не справляются и нужно их масштабировать
+как горизонтально, наращивая число накопителей в RAID-массивах, так и вертикально - заменяя HDD на SSD; а может быть, стоит реализовать гибридный вариант.
-3. Вы, как опытный SRE, сделали мониторинг, куда вывели отображения выполнения SLA=99% по http кодам ответов.
-Вычисляете этот параметр по следующей формуле: summ_2xx_requests/summ_all_requests. Данный параметр не поднимается выше
-70%, но при этом в вашей системе нет кодов ответа 5xx и 4xx. Где у вас ошибка?
+Дополнительно:
+Если в проекте вычислительные ноды отделены от нод хранения и/или они все отделены от точки
+входа в систему (web-сервер), то можно мониторить объём генерируемого трафика, задержки или
+длительность отправки данных от и до вычислительных нод, точки входа, ноды хранения - возможно,
+на каком-то участке есть узкое место, тормозящее всю систему.
+
+---
+
+## Обязательная задача 2
+
+> Менеджер продукта посмотрев на ваши метрики сказал, что ему непонятно что такое RAM/inodes/CPUla. Также он сказал,
+> что хочет понимать, насколько мы выполняем свои обязанности перед клиентами и какое качество обслуживания. Что вы
+> можете ему предложить?
+
+Для оценки качества обслуживания можно настроить мониторинг доступности сервиса,
+качества ответов точки доступа и длительности обработки заданий на вычисления.
+
+Качество ответов точки доступа можно определить по http-кодам ответа web-сервера.
+Чем выше доля успешных кодов, тем лучше. Появление кодов ошибок (группы 4xx и 5xx)
+будет говорить о каких-то недоработках в сервисе.
+
+Доступность сервиса можно определять косвенно - по непрерывности прихода метрик нагрузки серверов.
+Например, когда нагрузка на ЦПУ снижается до нуля или становится атипично низкой, то, возможно,
+задачи на вычисления не поступают из-за проблем с точкой доступа или в каналах связи.
+Доступность можно определять и напрямую, реализовав самопроверку сервиса.
+Например, периодическим запуском скрипта проверки сервисов, который будет фиксировать и передавать в систему мониторинга состояние сервиса.
+
+Длительность выполнения задач на вычисления можно фиксировать по времени запуска задачи на вычислительной ноде и времени фиксации отчёта в нодах хранения.
+Либо централизованно в точке доступа при старте и завершении обработки задачи. На dashboard выводить гистограмму распределения длительности выполнения задач.
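Долю успешных ответов по http-кодам можно прикинуть даже простым скриптом - ниже набросок на bash, где вместо реального access-лога используются условные данные (имя файла и значения гипотетические):

```shell
#!/usr/bin/env bash
# Набросок: оценка доли успешных (2xx) ответов по списку кодов http.
# Файл со статусами - условные данные вместо реального access-лога.
printf '200\n201\n200\n404\n' > /tmp/status_codes.txt

# Считаем долю строк, начинающихся с "2", в процентах.
awk '{ total++; if ($1 ~ /^2/) ok++ } END { printf "2xx share: %.1f%%\n", ok / total * 100 }' /tmp/status_codes.txt
```

Тот же подход применим и к реальному логу web-сервера, если подставить колонку с кодом ответа.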
+
+Для каждой метрики (группы метрик) нужно ввести определённые значения или диапазоны значений, в рамках которых наш проект должен находиться (или которые не должен превышать).
+Соответствие текущих показателей целевым и будет оценкой качества выполнения наших обязательств перед клиентами.
+
+---
+
+## Обязательная задача 3
+
+> Вашей DevOps команде в этом году не выделили финансирование на построение системы сбора логов. Разработчики в свою
+> очередь хотят видеть все ошибки, которые выдают их приложения. Какое решение вы можете предпринять в этой ситуации,
+> чтобы разработчики получали ошибки приложения?
+
+Если команде не выделили ресурсов (ни финансовых, ни временных) на построение системы сбора логов, то
+нужно воспользоваться имеющимися на рынке облачными системами, которые допускают бесплатное использование.
+Подобными продуктами могут быть:
+ - [Sentry](https://sentry.io/welcome/)
+ - [Yandex Cloud Logging](https://cloud.yandex.ru/services/logging)
+ - [New Relic](https://newrelic.com/)
+ - [Bugsnag](https://www.bugsnag.com/)
+ - [Rollbar](https://rollbar.com/)
+ - [Instabug](https://www.instabug.com/)
+
+---
+
+## Обязательная задача 4
+
+> Вы, как опытный SRE, сделали мониторинг, куда вывели отображения выполнения SLA=99% по http кодам ответов.
+> Вычисляете этот параметр по следующей формуле: summ_2xx_requests/summ_all_requests. Данный параметр не поднимается выше
+> 70%, но при этом в вашей системе нет кодов ответа 5xx и 4xx. Где у вас ошибка?
+
+Так как мы рассчитываем выполнение **SLA** по кодам **http** ответов, то в расчётной формуле
+для **SLI** значение должно быть частным от деления суммы запросов с успешными http-кодами
+ответов на общее число запросов, а именно:
+```
+SLI = (запросы_группы_2xx + запросы_группы_3xx) / сумма_всех_запросов
+```
+Из всех [имеющихся групп](https://developer.mozilla.org/ru/docs/Web/HTTP/Status) кодов только **4xx** и **5xx** исключаются как индикаторы ошибок.
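Эту формулу легко проверить численно - набросок на bash, где количества запросов условные и подобраны так, чтобы результат совпал с 70% из условия задачи (при учёте одних только 2xx без 3xx значение как раз "застревает" ниже целевых 99%):

```shell
#!/usr/bin/env bash
# Числовая проверка формулы SLI; количества запросов - условные данные.
req_2xx=6500; req_3xx=500; req_total=10000

# SLI = (2xx + 3xx) / все запросы, в процентах.
awk -v ok=$((req_2xx + req_3xx)) -v all="$req_total" 'BEGIN { printf "SLI = %.0f%%\n", ok / all * 100 }'
```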
+Коды группы **1xx** носят информационный характер и по сути являются промежуточными ответами сервера перед финальным ответом на запрос, поэтому их также можно исключить.
+
+---

 ## Дополнительное задание (со звездочкой*) - необязательно к выполнению

diff --git a/10-monitoring-03-grafana/README.md b/10-monitoring-03-grafana/README.md
index 5ac969c43..7df1efa3f 100644
--- a/10-monitoring-03-grafana/README.md
+++ b/10-monitoring-03-grafana/README.md
@@ -1,53 +1,43 @@
 # Домашнее задание к занятию "14.Средство визуализации Grafana"
-## Задание повышенной сложности
-
-**В части задания 1** не используйте директорию [help](./help) для сборки проекта, самостоятельно разверните grafana, где в
-роли источника данных будет выступать prometheus, а сборщиком данных node-exporter:
-- grafana
-- prometheus-server
-- prometheus node-exporter
-
-За дополнительными материалами, вы можете обратиться в официальную документацию grafana и prometheus.
+## Обязательные задания
-В решении к домашнему заданию приведите также все конфигурации/скрипты/манифесты, которые вы
-использовали в процессе решения задания.
+### Задание 1
+Сделал иначе: на работе поднял Grafana с такими же метриками, как в Zabbix.
+Решение домашнего задания - скриншот веб-интерфейса grafana со списком подключенных Datasource.
-**В части задания 3** вы должны самостоятельно завести удобный для вас канал нотификации, например Telegram или Email
-и отправить туда тестовые события.
+![Графана](help/Grafana.png)
-В решении приведите скриншоты тестовых событий из каналов нотификаций.
+![Заббикс](help/Zabbix.png)
+## Задания 2 и 3
+В Grafana были добавлены ещё две панели с метриками: Asterisk Active Peers и входящий/исходящий трафик роутера Mikrotik.
-## Обязательные задания
+Для решения ДЗ - приведите скриншот вашей итоговой Dashboard.
-### Задание 1
-Используя директорию [help](./help) внутри данного домашнего задания - запустите связку prometheus-grafana.
+![Asterisk Active Peers](help/asrerisk.png)
-Зайдите в веб-интерфейс графана, используя авторизационные данные, указанные в манифесте docker-compose.
+![Mikrotik](help/mikrotik.png)
-Подключите поднятый вами prometheus как источник данных.
+Также настроена система оповещений в Telegram-бот:
-Решение домашнего задания - скриншот веб-интерфейса grafana со списком подключенных Datasource.
+![Notification](help/notification.png)
-## Задание 2
-Изучите самостоятельно ресурсы:
-- [PromQL tutorial for beginners and humans](https://valyala.medium.com/promql-tutorial-for-beginners-9ab455142085)
-- [Understanding Machine CPU usage](https://www.robustperception.io/understanding-machine-cpu-usage)
-- [Introduction to PromQL, the Prometheus query language](https://grafana.com/blog/2020/02/04/introduction-to-promql-the-prometheus-query-language/)
+![Message](help/messange.png)
-Создайте Dashboard и в ней создайте следующие Panels:
-- Утилизация CPU для nodeexporter (в процентах, 100-idle)
-- CPULA 1/5/15
-- Количество свободной оперативной памяти
-- Количество места на файловой системе
-Для решения данного ДЗ приведите promql запросы для выдачи этих метрик, а также скриншот получившейся Dashboard.
+Готовый скрипт для [telegram](https://github.com/NamorNinayzuk/mnt-homeworks/tree/MNT-video/10-monitoring-03-grafana/help/telegram.sh)
-## Задание 3
-Создайте для каждой Dashboard подходящее правило alert (можно обратиться к первой лекции в блоке "Мониторинг").
+```bash
+#!/bin/bash
-Для решения ДЗ - приведите скриншот вашей итоговой Dashboard.
+token='<>'
+chat="$1"
+subj="$2"
+message="$3"
+/usr/bin/torsocks /usr/bin/curl -s --header 'Content-Type: application/json' --request 'POST' --data "{\"chat_id\":\"${chat}\",\"text\":\"${subj}\n${message}\"}" "https://api.telegram.org/bot${token}/sendMessage"
+```
+Где **<>** - уникальный id бота.

 ## Задание 4
 Сохраните ваш Dashboard.
@@ -57,6 +47,10 @@
 В решении задания - приведите листинг этого файла.
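Формирование JSON-payload из скрипта telegram.sh можно проверить локально, без сетевого вызова и реального токена - значения chat/subj/message здесь условные:

```shell
#!/usr/bin/env bash
# Набросок: сборка JSON-payload, который telegram.sh передаёт в curl,
# без обращения к api.telegram.org; значения подставлены условные.
chat="123456"; subj="Grafana"; message="CPU usage high"

# '\\n' в формате printf даёт литеральную последовательность \n,
# которую Telegram интерпретирует как перевод строки в тексте сообщения.
payload=$(printf '{"chat_id":"%s","text":"%s\\n%s"}' "$chat" "$subj" "$message")
echo "$payload"
```

Такой же вывод удобно подставлять в `curl --data` при ручной отладке бота.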
+--- +Листинг и файл: +[Json](https://github.com/NamorNinayzuk/mnt-homeworks/edit/MNT-video/10-monitoring-03-grafana/gr.json) + --- ### Как оформить ДЗ? diff --git a/10-monitoring-03-grafana/gr.json b/10-monitoring-03-grafana/gr.json new file mode 100644 index 000000000..2da2296a8 --- /dev/null +++ b/10-monitoring-03-grafana/gr.json @@ -0,0 +1,695 @@ +{ + "annotations": { + "list": [ + { + "builtIn": 1, + "datasource": { + "type": "grafana", + "uid": "-- Grafana --" + }, + "enable": true, + "hide": true, + "iconColor": "rgba(0, 211, 255, 1)", + "name": "Annotations & Alerts", + "target": { + "limit": 100, + "matchAny": false, + "tags": [], + "type": "dashboard" + }, + "type": "dashboard" + } + ] + }, + "editable": true, + "fiscalYearStartMonth": 0, + "graphTooltip": 0, + "id": 5, + "links": [], + "liveNow": false, + "panels": [ + { + "ackEventColor": "rgb(56, 219, 156)", + "ackField": true, + "ageField": true, + "customLastChangeFormat": false, + "datasource": { + "type": "alexanderzobnin-zabbix-datasource", + "uid": "NuRJTfT4z" + }, + "description": "", + "descriptionAtNewLine": false, + "descriptionField": true, + "fontSize": "100%", + "gridPos": { + "h": 8, + "w": 20, + "x": 0, + "y": 0 + }, + "highlightBackground": false, + "highlightNewEvents": false, + "highlightNewerThan": "1h", + "hostField": true, + "hostGroups": true, + "hostProxy": false, + "hostTechNameField": false, + "id": 2, + "lastChangeFormat": "", + "layout": "table", + "markAckEvents": false, + "okEventColor": "rgb(56, 189, 113)", + "pageSize": 10, + "pluginVersion": "9.3.2", + "problemTimeline": true, + "resizedColumns": [ + { + "id": "host", + "value": 96 + }, + { + "id": "hostTechName", + "value": 171.5 + }, + { + "id": "groups", + "value": 145 + }, + { + "id": "age", + "value": 85 + }, + { + "id": "lastchange", + "value": 161 + } + ], + "schemaVersion": 8, + "severityField": true, + "showTags": false, + "sortProblems": "lastchange", + "statusField": true, + "statusIcon": false, + "targets": [ 
+ { + "application": { + "filter": "" + }, + "datasource": { + "type": "alexanderzobnin-zabbix-datasource", + "uid": "NuRJTfT4z" + }, + "functions": [], + "group": { + "filter": "" + }, + "host": { + "filter": "/.*/" + }, + "item": { + "filter": "" + }, + "itemTag": { + "filter": "" + }, + "options": { + "acknowledged": 2, + "disableDataAlignment": false, + "hostProxy": false, + "hostsInMaintenance": false, + "limit": 1001, + "minSeverity": 0, + "showDisabledItems": false, + "skipEmptyValues": false, + "sortProblems": "default", + "useZabbixValueMapping": false + }, + "proxy": { + "filter": "" + }, + "queryType": "5", + "refId": "A", + "resultFormat": "time_series", + "showProblems": "problems", + "slaInterval": "none", + "slaProperty": { + "name": "SLA", + "property": "sla" + }, + "table": { + "skipEmptyValues": false + }, + "tags": { + "filter": "" + }, + "trigger": { + "filter": "" + }, + "triggers": { + "acknowledged": 2, + "count": true, + "minSeverity": 3 + } + } + ], + "title": "Problems and Warnings", + "triggerSeverity": [ + { + "$$hashKey": "object:90", + "color": "rgb(108, 108, 108)", + "priority": 0, + "severity": "Not classified", + "show": true + }, + { + "$$hashKey": "object:91", + "color": "rgb(120, 158, 183)", + "priority": 1, + "severity": "Information", + "show": true + }, + { + "$$hashKey": "object:92", + "color": "rgb(175, 180, 36)", + "priority": 2, + "severity": "Warning", + "show": true + }, + { + "$$hashKey": "object:93", + "color": "rgb(255, 137, 30)", + "priority": 3, + "severity": "Average", + "show": true + }, + { + "$$hashKey": "object:94", + "color": "rgb(255, 101, 72)", + "priority": 4, + "severity": "High", + "show": true + }, + { + "$$hashKey": "object:95", + "color": "rgb(215, 0, 0)", + "priority": 5, + "severity": "Disaster", + "show": true + } + ], + "type": "alexanderzobnin-zabbix-triggers-panel" + }, + { + "datasource": { + "type": "alexanderzobnin-zabbix-datasource", + "uid": "NuRJTfT4z" + }, + "fieldConfig": { + "defaults": 
{ + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "min": 0, + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green" + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 4, + "x": 20, + "y": 0 + }, + "id": 6, + "options": { + "legend": { + "calcs": [ + "lastNotNull" + ], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "application": { + "filter": "Asterisk" + }, + "datasource": { + "type": "alexanderzobnin-zabbix-datasource", + "uid": "NuRJTfT4z" + }, + "functions": [], + "group": { + "filter": "Linux servers" + }, + "host": { + "filter": "Asterisk" + }, + "item": { + "filter": "Active peers" + }, + "itemTag": { + "filter": "" + }, + "options": { + "disableDataAlignment": false, + "showDisabledItems": false, + "skipEmptyValues": false, + "useZabbixValueMapping": false + }, + "proxy": { + "filter": "" + }, + "queryType": "0", + "refId": "A", + "resultFormat": "time_series", + "table": { + "skipEmptyValues": false + }, + "tags": { + "filter": "" + }, + "trigger": { + "filter": "" + }, + "triggers": { + "acknowledged": 2, + "count": true, + "minSeverity": 3 + } + } + ], + "title": "Asterisk Active Peers", + "type": "timeseries" + }, + { + "datasource": { + "type": "alexanderzobnin-zabbix-datasource", + "uid": 
"NuRJTfT4z" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineStyle": { + "fill": "solid" + }, + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "percentage", + "steps": [ + { + "color": "green" + }, + { + "color": "red", + "value": 80 + } + ] + } + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "mikrotik: Ether 1 (TTK) Out" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "orange", + "mode": "palette-classic" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "mikrotik: Ether 1(TTK) In" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "green", + "mode": "palette-classic" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "mikrotik: Ether 0 (SL) In" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "orange", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "mikrotik: Ether 0 (SL) Out" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "blue", + "mode": "fixed" + } + } + ] + } + ] + }, + "gridPos": { + "h": 8, + "w": 11, + "x": 0, + "y": 8 + }, + "id": 4, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "timezone": [ + "Europe/Samara" + ], + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "application": { + 
"filter": "Network" + }, + "datasource": { + "type": "alexanderzobnin-zabbix-datasource", + "uid": "NuRJTfT4z" + }, + "functions": [], + "group": { + "filter": "Routerboard" + }, + "hide": false, + "host": { + "filter": "mikrotik" + }, + "item": { + "filter": "Ether 1(TTK) In" + }, + "itemTag": { + "filter": "" + }, + "options": { + "disableDataAlignment": false, + "showDisabledItems": false, + "skipEmptyValues": false, + "useZabbixValueMapping": false + }, + "proxy": { + "filter": "" + }, + "queryType": "0", + "refId": "A", + "resultFormat": "time_series", + "table": { + "skipEmptyValues": false + }, + "tags": { + "filter": "" + }, + "trigger": { + "filter": "" + }, + "triggers": { + "acknowledged": 2, + "count": true, + "minSeverity": 3 + } + }, + { + "application": { + "filter": "Network" + }, + "datasource": { + "type": "alexanderzobnin-zabbix-datasource", + "uid": "NuRJTfT4z" + }, + "functions": [], + "group": { + "filter": "Routerboard" + }, + "hide": false, + "host": { + "filter": "mikrotik" + }, + "item": { + "filter": "Ether 1 (TTK) Out" + }, + "itemTag": { + "filter": "" + }, + "options": { + "disableDataAlignment": false, + "showDisabledItems": false, + "skipEmptyValues": false, + "useZabbixValueMapping": false + }, + "proxy": { + "filter": "" + }, + "queryType": "0", + "refId": "B", + "resultFormat": "time_series", + "table": { + "skipEmptyValues": false + }, + "tags": { + "filter": "" + }, + "trigger": { + "filter": "" + }, + "triggers": { + "acknowledged": 2, + "count": true, + "minSeverity": 3 + } + }, + { + "application": { + "filter": "Network" + }, + "datasource": { + "type": "alexanderzobnin-zabbix-datasource", + "uid": "NuRJTfT4z" + }, + "functions": [], + "group": { + "filter": "Routerboard" + }, + "hide": false, + "host": { + "filter": "mikrotik" + }, + "item": { + "filter": "Ether 0 (SL) In" + }, + "itemTag": { + "filter": "" + }, + "options": { + "disableDataAlignment": false, + "showDisabledItems": false, + "skipEmptyValues": false, + 
"useZabbixValueMapping": false + }, + "proxy": { + "filter": "" + }, + "queryType": "0", + "refId": "C", + "resultFormat": "time_series", + "table": { + "skipEmptyValues": false + }, + "tags": { + "filter": "" + }, + "trigger": { + "filter": "" + }, + "triggers": { + "acknowledged": 2, + "count": true, + "minSeverity": 3 + } + }, + { + "application": { + "filter": "Network" + }, + "datasource": { + "type": "alexanderzobnin-zabbix-datasource", + "uid": "NuRJTfT4z" + }, + "functions": [], + "group": { + "filter": "Routerboard" + }, + "hide": false, + "host": { + "filter": "mikrotik" + }, + "item": { + "filter": "Ether 0 (SL) Out" + }, + "itemTag": { + "filter": "" + }, + "options": { + "disableDataAlignment": false, + "showDisabledItems": false, + "skipEmptyValues": false, + "useZabbixValueMapping": false + }, + "proxy": { + "filter": "" + }, + "queryType": "0", + "refId": "D", + "resultFormat": "time_series", + "table": { + "skipEmptyValues": false + }, + "tags": { + "filter": "" + }, + "trigger": { + "filter": "" + }, + "triggers": { + "acknowledged": 2, + "count": true, + "minSeverity": 3 + } + } + ], + "title": "Mikrotik IN & OUT", + "type": "timeseries" + } + ], + "refresh": "5s", + "schemaVersion": 37, + "style": "dark", + "tags": [], + "templating": { + "list": [] + }, + "time": { + "from": "now-1h", + "to": "now" + }, + "timepicker": {}, + "timezone": "", + "title": "Zabbix", + "uid": "DHaUbfoVk", + "version": 17, + "weekStart": "" +} diff --git a/10-monitoring-03-grafana/help/Grafana.png b/10-monitoring-03-grafana/help/Grafana.png new file mode 100644 index 000000000..8947cd349 Binary files /dev/null and b/10-monitoring-03-grafana/help/Grafana.png differ diff --git a/10-monitoring-03-grafana/help/Zabbix.png b/10-monitoring-03-grafana/help/Zabbix.png new file mode 100644 index 000000000..797ee9050 Binary files /dev/null and b/10-monitoring-03-grafana/help/Zabbix.png differ diff --git a/10-monitoring-03-grafana/help/asrerisk.png 
b/10-monitoring-03-grafana/help/asrerisk.png
new file mode 100644
index 000000000..7caaed84b
Binary files /dev/null and b/10-monitoring-03-grafana/help/asrerisk.png differ
diff --git a/10-monitoring-03-grafana/help/messange.png b/10-monitoring-03-grafana/help/messange.png
new file mode 100644
index 000000000..def4a2899
Binary files /dev/null and b/10-monitoring-03-grafana/help/messange.png differ
diff --git a/10-monitoring-03-grafana/help/mikrotik.png b/10-monitoring-03-grafana/help/mikrotik.png
new file mode 100644
index 000000000..e452a7668
Binary files /dev/null and b/10-monitoring-03-grafana/help/mikrotik.png differ
diff --git a/10-monitoring-03-grafana/help/notification.png b/10-monitoring-03-grafana/help/notification.png
new file mode 100644
index 000000000..f3cc39c74
Binary files /dev/null and b/10-monitoring-03-grafana/help/notification.png differ
diff --git a/10-monitoring-03-grafana/help/telegram.sh b/10-monitoring-03-grafana/help/telegram.sh
new file mode 100644
index 000000000..59c5bf877
--- /dev/null
+++ b/10-monitoring-03-grafana/help/telegram.sh
@@ -0,0 +1,8 @@
+#!/bin/bash
+
+token='<>'
+chat="$1"
+subj="$2"
+message="$3"
+
+/usr/bin/torsocks /usr/bin/curl -s --header 'Content-Type: application/json' --request 'POST' --data "{\"chat_id\":\"${chat}\",\"text\":\"${subj}\n${message}\"}" "https://api.telegram.org/bot${token}/sendMessage"
diff --git a/10-monitoring-05-sentry/README.md b/10-monitoring-05-sentry/README.md
index 846f3cbc9..7eb691f17 100644
--- a/10-monitoring-05-sentry/README.md
+++ b/10-monitoring-05-sentry/README.md
@@ -14,8 +14,9 @@ Free cloud account имеет следующие ограничения:
 - нажмите "Try for free"
 - используйте авторизацию через ваш github-account
 - далее следуйте инструкциям
-
+![Registration](img/first.png)
 Для выполнения задания - пришлите скриншот меню Projects.
+![Projects](img/Projects.png) ## Задание 2 @@ -28,6 +29,8 @@ Free cloud account имеет следующие ограничения: Для выполнения задание предоставьте скриншот `Stack trace` из этого события и список событий проекта, после нажатия `Resolved`. +![Stack trace](img/Stack_trace.png) + ## Задание 3 Перейдите в создание правил алёртинга. @@ -45,17 +48,13 @@ Free cloud account имеет следующие ограничения: Для выполнения задания - пришлите скриншот тела сообщения из оповещения на почте. +![Alert](img/alert.png) + Дополнительно поэкспериментируйте с правилами алёртинга. Выбирайте разные условия отправки и создавайте sample events. -## Задание повышенной сложности - -Создайте проект на ЯП python или GO (небольшой, буквально 10-20 строк), подключите к нему sentry SDK и отправьте несколько тестовых событий. -Поэкспериментируйте с различными передаваемыми параметрами, но помните об ограничениях free учетной записи cloud Sentry. - -Для выполнения задания пришлите скриншот меню issues вашего проекта и -пример кода подключения sentry sdk/отсылки событий. - +--- +[Project_Sentry.json](img/sentry_project.json) --- ### Как оформить ДЗ? 
diff --git a/10-monitoring-05-sentry/img/Projects.png b/10-monitoring-05-sentry/img/Projects.png new file mode 100644 index 000000000..6634d92e9 Binary files /dev/null and b/10-monitoring-05-sentry/img/Projects.png differ diff --git a/10-monitoring-05-sentry/img/Stack_trace.png b/10-monitoring-05-sentry/img/Stack_trace.png new file mode 100644 index 000000000..5b9c0d733 Binary files /dev/null and b/10-monitoring-05-sentry/img/Stack_trace.png differ diff --git a/10-monitoring-05-sentry/img/alert.png b/10-monitoring-05-sentry/img/alert.png new file mode 100644 index 000000000..046bd191a Binary files /dev/null and b/10-monitoring-05-sentry/img/alert.png differ diff --git a/10-monitoring-05-sentry/img/first.png b/10-monitoring-05-sentry/img/first.png new file mode 100644 index 000000000..ef35d87c7 Binary files /dev/null and b/10-monitoring-05-sentry/img/first.png differ diff --git a/10-monitoring-05-sentry/img/sentry_project.json b/10-monitoring-05-sentry/img/sentry_project.json new file mode 100644 index 000000000..47ce109e3 --- /dev/null +++ b/10-monitoring-05-sentry/img/sentry_project.json @@ -0,0 +1 @@ +{"event_id":"e782d2f01c2d423b86d0e48523aa4a52","project":4505426809389056,"release":null,"dist":null,"platform":"python","message":"This is an example Python exception","datetime":"2023-06-26T16:35:44+00:00","tags":[["browser","Chrome 28.0.1500"],["browser.name","Chrome"],["client_os","Windows 8"],["client_os.name","Windows"],["environment","prod"],["level","error"],["sample_event","yes"],["user","id:1"],["server_name","web01.example.org"],["url","http://example.com/foo"]],"_metrics":{"bytes.stored.event":8148},"contexts":{"browser":{"name":"Chrome","version":"28.0.1500","type":"browser"},"client_os":{"name":"Windows","version":"8","type":"os"}},"culprit":"raven.scripts.runner in 
main","environment":"prod","extra":{"emptyList":[],"emptyMap":{},"length":10837790,"results":[1,2,3,4,5],"session":{"foo":"bar"},"unauthorized":false,"url":"http://example.org/foo/bar/"},"fingerprint":["{{ default }}"],"hashes":["3a2b45089d0211943e5a6645fb4cea3f"],"level":"error","logentry":{"formatted":"This is an example Python exception"},"logger":"","metadata":{"title":"This is an example Python exception"},"modules":{"my.package":"1.0.0"},"nodestore_insert":1687797405.613273,"received":1687797404.858802,"request":{"url":"http://example.com/foo","method":"GET","data":{"hello":"world"},"query_string":[["foo","bar"]],"cookies":[["foo","bar"],["biz","baz"]],"headers":[["Content-Type","application/json"],["Referer","http://example.com"],["User-Agent","Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.72 Safari/537.36"]],"env":{"ENV":"prod"},"inferred_content_type":"application/json"},"stacktrace":{"frames":[{"function":"build_msg","module":"raven.base","filename":"raven/base.py","abs_path":"/home/ubuntu/.virtualenvs/getsentry/src/raven/raven/base.py","lineno":303,"pre_context":[" frames = stack",""," data.update({"," 'sentry.interfaces.Stacktrace': {"," 'frames': get_stack_info(frames,"],"context_line":" transformer=self.transform)","post_context":[" },"," })",""," if 'sentry.interfaces.Stacktrace' in data:"," if self.include_paths:"],"in_app":false,"vars":{"'culprit'":null,"'data'":{"'message'":"u'This is a test message generated using ``raven test``'","'sentry.interfaces.Message'":{"'message'":"u'This is a test message generated using ``raven test``'","'params'":[]}},"'date'":"datetime.datetime(2013, 8, 13, 3, 8, 24, 
880386)","'event_id'":"'54a322436e1b47b88e239b78998ae742'","'event_type'":"'raven.events.Message'","'extra'":{"'go_deeper'":[["{\"'bar'\":[\"'baz'\"],\"'foo'\":\"'bar'\"}"]],"'loadavg'":[0.37255859375,0.5341796875,0.62939453125],"'user'":"'dcramer'"},"'frames'":"","'handler'":"","'k'":"'sentry.interfaces.Message'","'kwargs'":{"'level'":20,"'message'":"'This is a test message generated using ``raven test``'"},"'public_key'":null,"'result'":{"'message'":"u'This is a test message generated using ``raven test``'","'sentry.interfaces.Message'":{"'message'":"u'This is a test message generated using ``raven test``'","'params'":[]}},"'self'":"","'stack'":true,"'tags'":null,"'time_spent'":null,"'v'":{"'message'":"u'This is a test message generated using ``raven test``'","'params'":[]}}},{"function":"capture","module":"raven.base","filename":"raven/base.py","abs_path":"/home/ubuntu/.virtualenvs/getsentry/src/raven/raven/base.py","lineno":459,"pre_context":[" if not self.is_enabled():"," return",""," data = self.build_msg("," event_type, data, date, time_spent, extra, stack, tags=tags,"],"context_line":" **kwargs)","post_context":[""," self.send(**data)",""," return (data.get('event_id'),)",""],"in_app":false,"vars":{"'data'":null,"'date'":null,"'event_type'":"'raven.events.Message'","'extra'":{"'go_deeper'":[["{\"'bar'\":[\"'baz'\"],\"'foo'\":\"'bar'\"}"]],"'loadavg'":[0.37255859375,0.5341796875,0.62939453125],"'user'":"'dcramer'"},"'kwargs'":{"'level'":20,"'message'":"'This is a test message generated using ``raven test``'"},"'self'":"","'stack'":true,"'tags'":null,"'time_spent'":null}},{"function":"captureMessage","module":"raven.base","filename":"raven/base.py","abs_path":"/home/ubuntu/.virtualenvs/getsentry/src/raven/raven/base.py","lineno":577,"pre_context":[" \"\"\""," Creates an event from ``message``.",""," >>> client.captureMessage('My event just happened!')"," \"\"\""],"context_line":" return self.capture('raven.events.Message', message=message, 
**kwargs)","post_context":[""," def captureException(self, exc_info=None, **kwargs):"," \"\"\""," Creates an event from an exception.",""],"in_app":false,"vars":{"'kwargs'":{"'data'":null,"'extra'":{"'go_deeper'":["[{\"'bar'\":[\"'baz'\"],\"'foo'\":\"'bar'\"}]"],"'loadavg'":[0.37255859375,0.5341796875,0.62939453125],"'user'":"'dcramer'"},"'level'":20,"'stack'":true,"'tags'":null},"'message'":"'This is a test message generated using ``raven test``'","'self'":""}},{"function":"send_test_message","module":"raven.scripts.runner","filename":"raven/scripts/runner.py","abs_path":"/home/ubuntu/.virtualenvs/getsentry/src/raven/raven/scripts/runner.py","lineno":77,"pre_context":[" level=logging.INFO,"," stack=True,"," tags=options.get('tags', {}),"," extra={"," 'user': get_uid(),"],"context_line":" 'loadavg': get_loadavg(),","post_context":[" },"," ))",""," if client.state.did_fail():"," print('error!')"],"in_app":false,"vars":{"'client'":"","'data'":null,"'k'":"'secret_key'","'options'":{"'data'":null,"'tags'":null}}},{"function":"main","module":"raven.scripts.runner","filename":"raven/scripts/runner.py","abs_path":"/home/ubuntu/.virtualenvs/getsentry/src/raven/raven/scripts/runner.py","lineno":112,"pre_context":[" print(\"Using DSN configuration:\")"," print(\" \", dsn)"," print()",""," client = Client(dsn, include_paths=['raven'])"],"context_line":" send_test_message(client, opts.__dict__)","in_app":false,"vars":{"'args'":["'test'","'https://ebc35f33e151401f9deac549978bda11:f3403f81e12e4c24942d505f086b2cad@sentry.io/1'"],"'client'":"","'dsn'":"'https://ebc35f33e151401f9deac549978bda11:f3403f81e12e4c24942d505f086b2cad@sentry.io/1'","'opts'":"","'parser'":"","'root'":""}}]},"timestamp":1687797344.855,"title":"This is an example Python exception","type":"default","user":{"id":"1","email":"sentry@example.com","ip_address":"127.0.0.1","username":"sentry","name":"Sentry","geo":{"country_code":"AU","city":"Melbourne","region":"VIC"}},"version":"5","location":null} diff --git 
a/10-monitoring-06-incident-management/README.md b/10-monitoring-06-incident-management/README.md
index 6ceff5d1d..505fd9a1b 100644
--- a/10-monitoring-06-incident-management/README.md
+++ b/10-monitoring-06-incident-management/README.md
@@ -1,17 +1,49 @@
-# Homework for the lesson "17. Incident management"
-
-## Task 1
-
-Write a postmortem based on the real GitHub outage of 2018.
-
-Information about the outage is available [as a short summary in Russian](https://habr.com/ru/post/427301/) and
-[in full in English](https://github.blog/2018-10-30-oct21-post-incident-analysis/).
-
+# Homework for lecture "10.6. Incident management"
+
+## Problem statement
+
+> Write a postmortem based on the real GitHub outage of 2018.
+>
+> Information about the outage is available [as a short summary in Russian](https://habr.com/ru/post/427301/) and
+> [in full in English](https://github.blog/2018-10-30-oct21-post-incident-analysis/).
+
+Brief description of the sections (from the lecture slides):
+ - Incident summary - a short digest of the incident
+ - Preceding events - what happened before the incident
+ - Root cause - what triggered the incident
+ - Impact - what the incident affected
+ - Detection - when and how the incident was discovered
+ - Response - who responded to the incident, who was brought in, which communication channels were used
+ - Recovery - the remediation steps taken and how the system behaved
+ - Timeline - the key events of the incident in order, with timestamps
+ - Follow-up actions - what must be done to keep the incident from recurring
+
+## Solution
+
+Section | GitHub outage of 21 October 2018
+ --- | ---
+Incident summary | A brief loss of connectivity between a network hub and a data centre broke the Orchestrator cluster topology, leaving the system degraded
+Preceding events | Planned maintenance to replace failed 100G optical equipment in a US East Coast data centre
+Root cause | During the brief loss of connectivity the MySQL clusters ended up in a state the Orchestrator did not expect (West Coast servers only). Normal state: ![normal](https://github.blog/wp-content/uploads/2018/10/normal-topology.png) State after the failure: ![bad](https://github.blog/wp-content/uploads/2018/10/invalid-topology.png)
+Impact | Stale and inconsistent data was being served. In most cases **WebHook** delivery and the building and publishing of **GitHub Pages** did not work either
+Detection | The incident was spotted by the on-call engineers through monitoring alerts
+Response | The first-response team acted to keep the incident from compounding (disabling some services to minimise database changes), after which an incident coordinator and additional engineers from the database team joined in. Because data integrity took priority, total recovery time came to 24 hours 11 minutes
+Recovery | To rule out data loss, some services were temporarily disabled to minimise the accumulation of changes; data was then restored from backups and the accumulated changes replicated on top. The database cluster topology was next reconfigured manually and the disabled functionality re-enabled. As a final step the team waited for the backlog of queued jobs to drain, which even required temporarily spinning up additional read replicas. ![recovery](https://github.blog/wp-content/uploads/2018/10/recovery-flow.png)
+Timeline | **21 October 2018, 22:52 UTC** - Connectivity was lost between a network hub and the primary US East Coast data centre
**21 October 2018, 22:52 UTC** - The Orchestrator in the primary data centre started electing a new leader and as a result began building a database cluster topology in the West. Once connectivity was restored, applications directed write traffic to the new West Coast primaries. Each side thus accumulated data the other did not have, and the primary could not simply be moved back to the East
**21 October 2018, 22:54 UTC** - Internal monitoring systems began raising alerts about numerous service failures
**21 October 2018, 23:02 UTC** - First-response engineers determined that the topologies of numerous database clusters were in an unexpected state: the Orchestrator API reported a database replication topology containing only servers from the West Coast data centre
**21 October 2018, 23:07 UTC** - The response team manually locked the internal deployment tooling to prevent further changes to the databases
**21 October 2018, 23:09 UTC** - The response team set the site health status to yellow (active incident)
**21 October 2018, 23:11 UTC** - The incident coordinator joined and two minutes later decided to change the site status to red
**21 October 2018, 23:13 UTC** - Additional engineers from the database team were brought in and began examining the current state to work out the steps needed to reconfigure the databases manually
**21 October 2018, 23:19 UTC** - To keep data safe, a decision was made to stop jobs that write metadata, such as push events
**22 October 2018, 00:05 UTC** - Engineers from the response team started drafting a plan to resolve the data inconsistency and initiated the MySQL failover procedures
**22 October 2018, 00:41 UTC** - Backups were initiated for all affected MySQL clusters. In parallel, several engineering teams looked for ways to speed up transfer and restore without further degrading the site or risking data corruption
**22 October 2018, 06:51 UTC** - Several East Coast clusters finished restoring from backup and began replicating fresh data from the West Coast
**22 October 2018, 07:46 UTC** - An informational post about the incident was published
**22 October 2018, 11:12 UTC** - All primary databases were moved back to the East, but despite the improved site performance some data still lagged by several hours because of the delayed replicas
**22 October 2018, 13:15 UTC** - Replication lag towards a consistent state was observed to be growing (the estimate was increasing rather than decreasing). Provisioning of additional MySQL read replicas in the East Coast public cloud began
**22 October 2018, 16:24 UTC** - Replica synchronisation completed and the original topology was restored. The latency and availability problems were resolved. The red status was kept, however, because the backlog of accumulated data still had to be processed
**22 October 2018, 16:45 UTC** - All disabled features were re-enabled. TTLs were adjusted for the more than 200,000 jobs (out of more than 5 million) that had expired during recovery
**22 October 2018, 23:03 UTC** - All outstanding **webhook** events and **Pages** builds were processed, and the integrity and correct operation of all systems was confirmed. The site status was changed back to green
+Follow-up actions | A systematic practice of testing failure scenarios before they occur for real was introduced (**Chaos Engineering**)

 ---
-### How to submit your homework?
-
-Send a link to the .md file with the completed homework in your repository.
-
----
+> Example from the lecture slides
+>
+> Section | Description
+> --- | ---
+> Incident summary | Around 2 a.m. CPU utilisation on the server went up. As a consequence, some services became unavailable.
+> Preceding events | Patch 1.1 had been deployed to the pretty service
+> Root cause | Patch 1.1 contained a bug that made the service write files to disk repeatedly, over and over.
+> Impact | The server running the pretty service also hosted the online store's shopping-cart service. Placing orders was unavailable for 100% of users for 15 minutes.
+> Detection | The incident was spotted by the on-call engineer. The responsible developers were then brought in.
+> Response | The responsible developers resolved the incident in 15 minutes.
+> Recovery | Minor patch 1.1.1 fixing the problem was deployed and the system returned to normal operation. Server load dropped immediately after the patch was installed, and the cart became available for orders again
+> Timeline | 01:55 a CPU utilisation alert fired
01:57 the on-call engineer informed the responsible people about the failure
02:00 the code of the latest patch was inspected and the problem was found
02:05 a commit fixing the problem was made
02:09 the pretty service was automatically built and rolled out
02:10 the system was back to normal operation
+> Follow-up actions | A ticket (ZADCH-123) was filed in the issue tracker to add extra unit tests covering the problematic section of the code.