I have a problem with Ansible. I need to get the version of Apache with Ansible, and I'm using the command "httpd -v" in Apache's /bin/ folder.
The output now looks like:

Server version: Apache/2.4.48 (Win64)
Apache Lounge VS16 Server built: May 18 2021 10:45:56

So, can you help me please? I tried to use "regex" and still got an error.
Register the result of the command

- command: httpd -v
  register: result

gives, for example,

result.stdout: |-
  Server version: Apache/2.4.46 (FreeBSD)
  Server built: unknown
This is actually a YAML dictionary. Let's keep it in a variable. For example,

apache: "{{ result.stdout | from_yaml }}"

gives

apache:
  Server built: unknown
  Server version: Apache/2.4.46 (FreeBSD)
Now you can reference the attributes. For example,

apache['Server version']: Apache/2.4.46 (FreeBSD)

and split out the version

apache_version: "{{ apache['Server version'] | split(' ') | first | split('/') | last }}"

gives

apache_version: 2.4.46
Example of a complete playbook:

- hosts: srv
  vars:
    apache: "{{ result.stdout | from_yaml }}"
    apache_version: "{{ apache['Server version'] | split(' ') | first | split('/') | last }}"
  tasks:
    - command: httpd -v
      register: result
    - debug:
        var: apache_version
Using regex_search:

---
- hosts: localhost
  tasks:
    - name: Run httpd command
      shell: httpd -v
      register: httpd_version

    - name: show the extracted output
      debug:
        msg: "{{ httpd_version.stdout | regex_search('Server version.*?/([^ ]+).*', '\\1') }}"
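Note that regex_search with a match-group argument returns a list of the captured groups rather than a plain string, so the message above prints something like ['2.4.48']. As a sanity check against format drift, the extracted value can be validated with an assert task (a sketch; the exact version pattern is an assumption):

```yaml
- name: make sure we extracted something that looks like a version
  assert:
    that:
      # first element of the regex_search result, e.g. '2.4.48'
      - (httpd_version.stdout | regex_search('Server version.*?/([^ ]+).*', '\\1') | first) is match('\d+\.\d+\.\d+')
```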
I created a role that has two files in its templates folder: docker-compose.yml.j2 and env.j2.
env.j2 is used in the docker-compose file:

version: "2"
services:
  service_name:
    image: {{ IMAGE | mandatory }}
    container_name: service_name
    mem_limit: 256m
    user: "2001"
    env_file: ".env"

Now my question: is there an Ansible module that can send the docker-compose file to the host and validate it there, since the .env and docker-compose files end up in the same folder on the host machine?
The following Ansible task returns an error, because the .env file is not in the templates folder but on the host:

- name: "Copy env file"
  ansible.builtin.template:
    src: "env.j2"
    dest: "/opt/db_backup/.env"
    mode: '770'
    owner: deployment
    group: deployment

- name: "Validate and copy docker compose file"
  ansible.builtin.template:
    src: "docker-compose.yml.j2"
    dest: "/opt/db_backup/docker-compose.yml"
    mode: '770'
    owner: deployment
    group: deployment
    validate: docker-compose -f %s config
This probably falls into the "complex validation configuration" cases linked from the documentation of the template module's validate parameter.
In any case, unless you completely refactor your current file and pass more variables into your environment (e.g. to allow .env to live in a location outside the current directory), you cannot validate docker-compose.yml until both files are in the same location.
An easy approach is to copy both files into place, validate them before doing anything with them, and roll back to the previous version in case of error. The example below is far from bullet-proof, but it will give you an idea:
---
- hosts: localhost
  gather_facts: false

  vars:
    IMAGE: alpine:latest
    deploy_dir: /tmp/validate_compose

  tasks:
    - name: "make sure {{ deploy_dir }} directory exists"
      file:
        path: "{{ deploy_dir }}"
        state: directory

    - name: copy project file templates
      template:
        src: "{{ item }}"
        dest: "{{ deploy_dir }}/{{ item | regex_replace('^(.*)\\.j2', '\\g<1>') }}"
        mode: 0640
        backup: true
      loop:
        - .env.j2
        - docker-compose.yml.j2
      register: copy_files

    - block:
        - name: check docker-compose file validity
          command:
            cmd: docker-compose config
            chdir: "{{ deploy_dir }}"

      rescue:
        - name: rollback configuration to previous version for changed files
          copy:
            src: "{{ item.backup_file }}"
            dest: "{{ item.dest }}"
            remote_src: true
          loop: "{{ copy_files.results | selectattr('backup_file', 'defined') }}"

        - name: Give some info about the error
          debug:
            msg:
              - The compose file did not validate.
              - Please see the previous error above for details.
              - Files have been rolled back to the latest known version.

        - name: Fail
          fail:

    - name: Rest of the playbook using the above validated files
      debug:
        msg: Next tasks...
This is a continuation of my journey of creating multiple Docker projects dynamically. I did not mention previously that, to make this process dynamic, I want devs to specify which projects they want to use, so I'm using Ansible to bring up the local environment.
The logic is:

- run ansible-playbook run.yml -e "{projectsList: ['app-admin']}", providing the list of projects I want to start
- stop the existing main containers (in case they are running from a previous run)
- start the main containers
- depending on the provided list of projects, run the role tasks (I have a separate role for each supported project):
  - stop the existing child project containers (in case they are running from a previous run)
  - start the child project containers
  - apply some configuration depending on the role

And here is the issue (again) with the network: when I stop the main containers, it fails with the message

error while removing network: network appnetwork has active endpoints

That makes sense, as the child Docker containers use the same network, but so far I don't see a way to change the ordering of the tasks since I'm using roles, so the main Docker tasks always run before the role-specific tasks.
The main Ansible file:

---
#- import_playbook: './services/old.yml'
- hosts: localhost
  gather_facts: true
  vars:
    # list of all supported projects, THIS SHOULD BE UPDATED FOR EACH NEW PROJECT!
    supportedProjects: ['all', 'app-admin', 'app-landing']
  vars_prompt:
    - name: "ansible_become_pass"
      prompt: "Sudo password"
      private: yes
  pre_tasks:
    # The list of projects must be provided
    - fail: msg="The list of projects you want to run the playbook for was not provided"
      when: (projectsList is not defined) or (projectsList|length == 0)
    # Remove unsupported projects from the list
    - name: Filter out unsupported projects
      set_fact:
        filteredProjectsList: "{{ projectsList | intersect(supportedProjects) }}"
    # Check whether any projects remain after filtering
    - fail: msg="None of the projects you provided are supported. Supported projects: {{ supportedProjects }}"
      when: filteredProjectsList|length == 0
    # Always stop existing docker containers
    - name: stop existing common app docker containers
      docker_compose:
        project_src: ../docker/common/
        state: absent
    - name: start common app docker containers like nginx proxy, redis, mailcatcher etc. (this can take a while when running for the first time)
      docker_compose:
        project_src: ../docker/common/
        state: present
        build: no
        nocache: no
    - name: Get www-data id
      command: docker exec app-php id -u www-data
      register: wwwid
    - name: Get current user group id
      command: id -g
      register: userid
    - name: Register user and www-data ids
      set_fact:
        userid: "{{ userid.stdout }}"
        wwwdataid: "{{ wwwid.stdout }}"
  roles:
    - { role: app-landing, when: '"app-landing" in filteredProjectsList or "all" in filteredProjectsList' }
    - { role: app-admin, when: ("app-admin" in filteredProjectsList) or ("all" in filteredProjectsList) }
and a role example, app-admin/tasks/main.yml:

---
- name: Sync {{name}} with git (can take a while to clone the repo for the first time)
  git:
    repo: "{{gitPath}}"
    dest: "{{destinationPath}}"
    version: "{{branch}}"
- name: stop existing {{name}} docker containers
  docker_compose:
    project_src: "{{dockerComposeFileDestination}}"
    state: absent
- name: start {{name}} docker containers (this can take a while when running for the first time)
  docker_compose:
    project_src: "{{dockerComposeFileDestination}}"
    state: present
    build: no
    nocache: no
- name: Copy {{name}} env file
  copy:
    src: development.env
    dest: "{{destinationPath}}.env"
    force: no
- name: Set file permissions for local {{name}} project files
  command: chmod -R ug+w {{projectPath}}
  become: yes
- name: Set execute permissions for local {{name}} bin folder
  command: chmod -R +x {{projectPath}}/bin
  become: yes
- name: Set user/group for {{name}} to {{wwwdataid}}:{{userid}}
  command: chown -R {{wwwdataid}}:{{userid}} {{projectPath}}
  become: yes
- name: Composer install for {{name}}
  command: docker-compose -f {{mainDockerComposeFileDestination}}docker-compose.yml exec -T app-php sh -c "cd {{containerProjectPath}} && composer install"
Maybe there is a way to somehow unlink the network when the main containers stop. I thought that marking the network as external in the child containers' compose file:

networks:
  appnetwork:
    external: true

would solve the issue, but it does not.
A quick experiment with an external network:

dc1/dc1.yml

version: "3.0"
services:
  nginx:
    image: nginx
    ports:
      - "8080:80"
    networks:
      - an0
networks:
  an0:
    external: true

dc2/dc2.yml

version: "3.0"
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
    networks:
      - an0
networks:
  an0:
    external: true
Starting and stopping:
$ docker network create -d bridge an0
1e07251e32b0d3248b6e70aa70a0e0d0a94e457741ef553ca5f100f5cec4dea3
$ docker-compose -f dc1/dc1.yml up -d
Creating dc1_nginx_1 ... done
$ docker-compose -f dc2/dc2.yml up -d
Creating dc2_redis_1 ... done
$ docker-compose -f dc1/dc1.yml down
Stopping dc1_nginx_1 ... done
Removing dc1_nginx_1 ... done
Network an0 is external, skipping
$ docker-compose -f dc2/dc2.yml down
Stopping dc2_redis_1 ... done
Removing dc2_redis_1 ... done
Network an0 is external, skipping
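If the appnetwork from the question is created the same way, ahead of time and outside of any compose project, both the main and the child projects will skip removing it on `down`, which avoids the "active endpoints" error. A minimal sketch of creating it from Ansible, assuming the docker_network module that ships alongside docker_compose (the network name is taken from the error message):

```yaml
- name: create the shared bridge network outside of any compose project
  docker_network:
    name: appnetwork
    driver: bridge
    state: present
```

This task would run once before any of the docker_compose tasks, and the compose files of both the main and child projects would then declare appnetwork as external.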
I have a variable which I need to parse to pull out a version string; is there a way to do this? Below is an example of the Ansible variable:

--xxx 1.2.3-102 --yyy 2.5.10-47 --zzz 10.4.2-193

Update: adding the Ansible task format
---
- hosts: localhost
  tasks:
    - name: Get Version
      shell: echo '{{ version }}'
      register: results
    - set_fact:
        value: "{{ results.stdout | regex_search(regexp, '') }}"
      vars:
        regexp: ''
    - debug:
        var: value
Getting just the version number after "--yyy"; alter the regular expression as needed for your task:

- hosts: localhost
  tasks:
    - name: Get Version
      shell: echo '--xxx 1.2.3-102 --yyy 2.5.10-47 --zzz 10.4.2-193'
      register: results
    - name: set regex
      set_fact:
        re: '--yyy\s+(?P<digit>\d+\.\d+\.\d+-\d+)'
    - set_fact:
        value: "{{ results.stdout | regex_search(re, '\\g<digit>') }}"
    - debug:
        var: value[0]
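The intermediate shell/echo task is not strictly necessary; under the same assumptions, the filter can be applied to the variable directly (a sketch; note the doubled backslashes required inside a double-quoted YAML scalar, and `first` to unwrap the list that regex_search returns for group arguments):

```yaml
- set_fact:
    value: "{{ version | regex_search('--yyy\\s+(?P<digit>\\d+\\.\\d+\\.\\d+-\\d+)', '\\g<digit>') | first }}"
```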
m-p's solution works great for the latest version of Ansible; unfortunately I have to use a pre-2.0 Ansible (1.9.6), which doesn't seem to support regex_search for some strange reason.
In that case I will use the following:

"{{ results | regex_replace('((xxx|yyy)\\s[\\S]+)|(--|zzz|\\s)', '') | join }}"
I am trying to build a Docker container which should include startup scripts in the container's /etc/my_init.d directory via Ansible. I have difficulty finding any documentation on how to do this. Here is the relevant portion of my YAML file:

- name: Create container
  docker:
    name: myserver
    image: "{{ docker_repo }}/myserver:{{ server.version }}"
    state: started
    restart_policy: always
    docker_api_version: 1.18
    registry: "{{ docker_repo }}"
    username: "{{ registry_user }}"
    password: "{{ registry_password }}"
    links:
      - "mywebservices"
    ports:
      - "8000:8000"
      - "9899:9899"
    volumes:
      - "{{ myserver_home_dir }}/logs:/var/log/my_server"
    env:
      MY_ENVIRONMENT: "{{ my_environment }}"
  when: myserver_action == "create"
        or (myserver_action == "diff-create" and myserver.changed)
        or myserver_action == "update"

What should I add here to tell Ansible to put my files into the container's /etc/my_init.d during the build?
First of all, you can't build containers (you can only start them); you build images.
Second, the docker module is deprecated; use docker_image to build images.
You should copy your files into a build directory (with the copy or synchronize modules), for example /tmp/build.
Then create a Dockerfile that takes them from the build directory and adds them to your image.
After that, call docker_image:

docker_image:
  path: /tmp/build
  name: myimage

Finally, start your container:

docker_container:
  image: myimage
  name: mycontainer
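Putting the first two steps together, a rough sketch of the copy-then-build tasks (all paths and names here are assumptions; the Dockerfile in /tmp/build would contain something like COPY my_init.d/ /etc/my_init.d/):

```yaml
- name: copy startup scripts into the build directory on the target
  copy:
    src: files/my_init.d/
    dest: /tmp/build/my_init.d/

- name: build the image from the Dockerfile in /tmp/build
  docker_image:
    path: /tmp/build
    name: myimage
```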
Unsure if it's relevant, as I don't know what your startup Ansible content is doing, but it's probably worth looking at the Ansible Container project.
https://github.com/ansible/ansible-container
You can build your container images using Ansible roles instead of a Dockerfile, orchestrate them locally, and deploy them to production Kubernetes or Red Hat OpenShift.
I have a problem when I launch an Ansible role to install Docker in a CentOS 7 VM.
When the docker_login task runs, I get the following error:

"msg": "Docker API Error: client is newer than server (client API version: 1.24, server API version: 1.22)"

And this is the Ansible role:
- name: Install python setup tools
  yum: name=python-setuptools
  tags: docker

- name: Install Pypi
  easy_install: name=pip
  tags: docker

- name: Install docker-py
  pip: name=docker-py
  tags: docker

- name: Install Docker
  yum: name=docker state=latest
  tags: docker

- name: Make sure Docker is running
  service: name=docker state=running
  tags: docker

- include: setup.yml

- name: login to private Docker remote registry and force reauthentication
  docker_login:
    registry: "{{ item.insecure_registry }}"
    username: "{{ item.registry_user }}"
    password: "{{ item.registry_password }}"
    reauth: yes
  with_items:
    - "{{ private_docker_registry }}"
  when: private_docker_registry is defined
This installs Docker 1.10.3, which speaks API version 1.22.
Add the api_version argument to the docker_login module:

- name: login to private Docker remote registry and force reauthentication
  docker_login:
    registry: "{{ item.insecure_registry }}"
    username: "{{ item.registry_user }}"
    password: "{{ item.registry_password }}"
    reauth: yes
    api_version: 1.22
  with_items:
    - "{{ private_docker_registry }}"
  when: private_docker_registry is defined
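Since the mismatch comes from the docker-py client defaulting to a newer API than the daemon supports, an alternative is to pin the client library at install time instead of hard-coding api_version in every task. A sketch, using the pip module's version parameter (the exact release to pin is an assumption; check the docker-py changelog against your Docker 1.10.3 server):

```yaml
- name: Install a docker-py release that matches the server's API
  pip:
    name: docker-py
    version: 1.7.2
  tags: docker
```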