How to change the Docker image source in OpenStack Kolla?

Hi, I am trying to install the CloudKitty service using OpenStack kolla-ansible. During the installation, when it tries to pull the container images from Docker Hub, it throws the error "The container image not found: cloudkitty-api:ussuri". When I check on Docker Hub, the required image is not present there, but an equivalent image exists under the name objectiflibre/cloudkitty-api. What should I change in kolla-ansible so that it pulls that specific image for CloudKitty?
I tried to change the cloudkitty defaults/main.yml, but I don't understand what I should change in it.
/root/kolla-openstack/share/kolla-ansible/ansible/roles/cloudkitty/defaults/main.yml
####################
# Docker
####################
cloudkitty_install_type: "{{ kolla_install_type }}"
cloudkitty_tag: "{{ openstack_tag }}"
cloudkitty_api_image: "{{ docker_registry ~ '/' if docker_registry else '' }}{{ docker_namespace }}/{{ kolla_base_distro }}-{{ cloudkitty_install_type }}-cloudkitty-api"
cloudkitty_api_tag: "{{ cloudkitty_tag }}"
cloudkitty_api_image_full: "{{ cloudkitty_api_image }}:{{ cloudkitty_api_tag }}"
my globals.yml
###############
# Kolla options
###############
# Valid options are [ COPY_ONCE, COPY_ALWAYS ]
config_strategy: "COPY_ALWAYS"
# Valid options are ['centos', 'debian', 'rhel', 'ubuntu']
kolla_base_distro: "ubuntu"
# Valid options are [ binary, source ]
kolla_install_type: "binary"
# Do not override this unless you know what you are doing.
openstack_release: "ussuri"
# Docker image tag used by default.
#openstack_tag: "{{ openstack_release ~ openstack_tag_suffix }}"
enable_cloudkitty: "yes"
enable_horizon_cloudkitty: "{{ enable_cloudkitty | bool }}"
cloudkitty_collector_backend: "gnocchi"
cloudkitty_storage_backend: "influxdb"

I would first try to pull the image to my local registry:
# docker pull objectiflibre/cloudkitty-api
then try to set the correct variables (see the sketch after this list) in:
/root/kolla-openstack/share/kolla-ansible/ansible/group_vars/all.yml
docker_registry
docker_namespace
kolla_base_distro
cloudkitty_install_type
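Since objectiflibre/cloudkitty-api does not follow the kolla naming scheme that cloudkitty_api_image is built from ({namespace}/{base_distro}-{install_type}-cloudkitty-api), changing docker_namespace alone is probably not enough. An untested sketch of overriding the full image variables for CloudKitty in globals.yml instead (kolla-ansible lets you override any role default there); the tag value and the processor variable names are assumptions to check against the role defaults:
# globals.yml -- untested sketch, override the composed image name directly
cloudkitty_api_image: "objectiflibre/cloudkitty-api"
cloudkitty_api_tag: "latest"   # assumption: use whatever tag objectiflibre actually publishes
# the processor container has matching *_image/*_tag variables in the same
# role defaults (e.g. cloudkitty_processor_image) that would need the same treatment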

Related

Ansible copy file to remote instance error

I have an Ansible playbook run by Jenkins on server A; all it does is copy a folder of files from server A to a remote server B.
The code looks like this:
- hosts: "{{ lookup('env', 'REMOTE_HOST') }}"
any_errors_fatal: true # fail all hosts if any host fails
become: true
become_method: sudo
become_user: root
gather_facts: yes
vars:
date_time: "{{ lookup('env', 'DATE_TIME') }}"
version_main: 10.7
sql_dir: "~/code/data/sql/sql_data/{{ version_main }}"
remote_files_directory: "~/deploy/{{ version_main }}/{{ date_time }}/files"
tasks:
- name: "Copy SQL files from {{ sql_dir }} to {{ remote_files_directory }} on the remote host"
become: yes
become_user: user_a
ansible.builtin.copy:
src: "{{ sql_dir }}"
dest: "{{ remote_files_directory }}"
remote_src: true
All the SQL files are on server A, under user_a's home directory: ~/code/data/sql/sql_data/{{ version_main }}, and I want to copy them to server B (REMOTE_HOST) under the same user_a home: ~/deploy/{{ version_main }}/{{ date_time }}/files
Variables REMOTE_HOST, DATE_TIME are from Jenkins.
The error I am getting with remote_src: true is:
fatal: [server_B]: FAILED! => {"changed": false, "msg": "Source /home/user_a/code/data/sql/sql_data/10.7/ not found"}
If I set remote_src: false, I get this error:
fatal: [server_B]: FAILED! => {"changed": false, "msg": "Could not find or access '~/code/data/sql/sql_data/10.7' on the Ansible Controller. If you are using a module and expect the file to exist on the remote, see the remote_src option"}
I even added delegate_to: "{{ lookup('env', 'DEPLOY_DATAMONTH_HOST') }}" and it does not make any difference.
Somehow it cannot figure out the source folder on server A, which is where Ansible and Jenkins run.
This is the Ansible version I have on server A.
ansible --version
ansible [core 2.13.7]
User jenkins cannot access /home/user_a/code/data/sql/sql_data/10.7/ directly, but jenkins can sudo su - user_a, so I think
become: yes
become_user: user_a
should have helped.
What am I still missing?
I believe that defining this at the play level with the following block:
become: true
become_method: sudo
become_user: root
takes priority over your attempt to become user_a in the task with this block:
become: yes
become_user: user_a
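If that precedence is indeed the problem, one way to test it would be to drop become_user from the play header and escalate only in the task that needs user_a; a minimal, untested sketch based on the playbook above:
- hosts: "{{ lookup('env', 'REMOTE_HOST') }}"
  any_errors_fatal: true
  become: true
  become_method: sudo        # no play-level become_user here
  gather_facts: yes
  vars:
    date_time: "{{ lookup('env', 'DATE_TIME') }}"
    version_main: 10.7
    sql_dir: "~/code/data/sql/sql_data/{{ version_main }}"
    remote_files_directory: "~/deploy/{{ version_main }}/{{ date_time }}/files"
  tasks:
    - name: Copy SQL files as user_a
      become: yes
      become_user: user_a    # task-level escalation only
      ansible.builtin.copy:
        src: "{{ sql_dir }}"
        dest: "{{ remote_files_directory }}"
        remote_src: true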

Remove all images using Ansible module

I know how to remove all images using shell:
- shell: |
    docker stop $(docker ps -a -q)
    docker rm $(docker ps -a -q)
    docker rmi $(docker images -q)
    exit 0
I can also remove an image with docker_image by name:
- name: Remove image
  docker_image:
    state: absent
    name: "{{ image_name }}"
But, is there a way to remove all images using docker_image module or any other Ansible module for working with docker?
You will have to do it in multiple steps, like your sub-shells are doing:
list Docker objects, including images and containers, with the docker_host_info module
remove all those containers, with the docker_container module
remove all those images, with the docker_image module
So, in Ansible:
- docker_host_info:
    images: yes
    containers: yes
  register: docker_objects

- docker_container:
    name: "{{ item.Id }}"
    state: absent
  loop: "{{ docker_objects.containers }}"
  loop_control:
    label: "{{ item.Id }}"

- docker_image:
    name: "{{ item.RepoTags | first }}"
    state: absent
    force_absent: yes
  loop: "{{ docker_objects.images }}"
  loop_control:
    label: "{{ item.RepoTags | first }}"

Ansible Loop Register

I have an Ansible playbook to create new users and a Docker container for each user. The user information is gathered from a YAML file; there are more than 200 records. The YAML file consists of username and password. To create the Docker container I have to pass the user's PUID and PGID to the docker run command. See below for an example command:
docker run --restart=always -it --init -td -p {port from ansible}:8443 -e PUID={PUID from ansible} -e PGID={PGID from Ansible} linuxserver/code-server:latest
I want to get the PUID and PGID of each user and register them in a variable to use when creating the Docker container. I tried the following task, but since it appends the output to a dictionary of results, I am not able to match the username with the PUID/PGID.
- name: GET PUID
  shell: id -u "{{ item.username }}"
  loop: "{{ user }}"
  register: puid

- name: pgid Variable
  debug: msg="{{ pgid.stdout }}"
YAML for user
user:
  - username: john.doe
    password: password
The docker image that I want to use: https://hub.docker.com/r/linuxserver/code-server
For example, get the UID and GID of user admin on test_11:
shell> ssh admin@test_11 id admin
uid=1001(admin) gid=1001(admin) groups=1001(admin)
Use the module getent. The playbook below
- hosts: test_11
  tasks:
    - getent:
        database: passwd

    - set_fact:
        admin_uid: "{{ ansible_facts.getent_passwd.admin.1 }}"
        admin_gid: "{{ ansible_facts.getent_passwd.admin.2 }}"

    - debug:
        msg: |
          admin_uid: {{ admin_uid }}
          admin_gid: {{ admin_gid }}
gives (abridged)
TASK [debug] ********************************************************
ok: [test_11] =>
  msg: |-
    admin_uid: 1001
    admin_gid: 1001
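Applying the same idea to the original use case, an untested sketch: run getent once, then look up each user's UID/GID by name when starting the container. The container name, host port mapping, and reuse of the "user" variable from the question are assumptions here:
- getent:
    database: passwd

- name: Start one code-server container per user
  community.docker.docker_container:
    name: "code-server-{{ item.username }}"    # assumed naming scheme
    image: linuxserver/code-server:latest
    restart_policy: always
    init: yes
    published_ports:
      - "8443:8443"                            # pick a distinct host port per user
    env:
      PUID: "{{ ansible_facts.getent_passwd[item.username].1 }}"
      PGID: "{{ ansible_facts.getent_passwd[item.username].2 }}"
  loop: "{{ user }}"
  loop_control:
    label: "{{ item.username }}"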

How to force Ansible to recreate a docker container if mounted files have changed

I'm trying to get Ansible to recreate an existing Docker container in case one of the mounted files has changed. I tried to use docker_container to remove the container, if it exists and any file has changed, before deploying it with docker_stack and a compose file. Here is the code:
- name: Template configuration files to destination
  template:
    ...
  loop:
    ...
  register: template_files_result

- name: Get docker container name
  shell: "docker ps -f 'name=some_name' -q"
  register: container_id

- name: Remove container
  docker_container:
    name: container_id.stdout
    force_kill: yes
    state: absent
  vars:
    files_changed: "{{ template_files_result | json_query('results[*].changed') }}"
  when: container_id.stdout and files_changed is any

- name: Deploy
  docker_stack:
    state: present
    name: stack_name
    compose:
      - "compose.yml"
    with_registry_auth: true
However, the Remove container task never does anything and I can't figure out why.
What am I missing?

ansible jenkins_plugin module: error when running a playbook again: Jenkins home not found

I have a problem with the jenkins_plugin module.
Within a playbook that pulls a Jenkins Docker image (jenkins/jenkins:lts-alpine) and runs it to install and configure the instance, I have a task that installs a list of plugins on the instance, which is:
- name: Install plugins
  jenkins_plugin:
    owner: "{{ jenkins_process_user }}"
    group: "{{ jenkins_process_group }}"
    name: "{{ item }}"
    state: latest
    timeout: 120
    url: "http://{{ jenkins_hostname }}:{{ jenkins_http_port }}{{ jenkins_url_prefix }}"
    url_username: "{{ jenkins_admin_name }}"
    url_password: "{{ jenkins_admin_password }}"
    with_dependencies: yes
  loop: "{{ jenkinsPlugins }}"
  register: pluginResult
  until: not pluginResult.failed
  retries: 5
  notify: restart_image
  become: True
  become_user: "{{ jenkins_process_user }}"
It works correctly when the playbook is run for the first time.
All plugins are installed, and possibly retried in case of problems.
But when I relaunch exactly the same playbook, each and every plugin installation is retried up to the maximum number of retries and fails with (for example):
failed: [devEnv] (item=blueocean) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": "blueocean", "msg": "Jenkins home directory doesn't exist."}
I have of course verified that the Jenkins home directory actually exists and has the expected "{{ jenkins_process_user }}" and
"{{ jenkins_process_group }}" owner and group, which is jenkins:jenkins.
Note that the Docker container is bound to a local directory which belongs to jenkins:jenkins. To be sure the uid and gid are the same on the local machine (a VM created with Vagrant) and in the container, the uid:gid are forced to 1001:1001 when starting the container.
I have also checked that this is actually the case.
I really cannot explain why I get this error, which clearly makes this playbook non-idempotent!
I know that there is a way to install plugins via a shell script provided by Jenkins, but I'd like to stick with the Ansible playbook as far as possible.
Of course, I can provide the whole playbook if you need additional information.
Thanks for your help.
J-L
OK, I understand the problem.
Reading the jenkins_plugin documentation again, and looking at the module code, I found that installing a plugin and checking the version of an already installed plugin do not run the same code (two different branches of a test).
The second branch needs JENKINS_HOME to be defined, which is an optional module parameter (defaulting to /var/lib/jenkins). I did not set it.
It is indeed /var/lib/jenkins inside the container, but not on the Docker controller machine, which is the Ansible playbook target, where it is /home/jenkins/jenkins_home.
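If that is the cause, one possible fix (an untested sketch, reusing the variables from the task above) is to pass the module's jenkins_home parameter explicitly so it points at the actual home on the playbook target:
- name: Install plugins
  jenkins_plugin:
    name: "{{ item }}"
    jenkins_home: /home/jenkins/jenkins_home   # actual Jenkins home on the target
    url: "http://{{ jenkins_hostname }}:{{ jenkins_http_port }}{{ jenkins_url_prefix }}"
    url_username: "{{ jenkins_admin_name }}"
    url_password: "{{ jenkins_admin_password }}"
    state: latest
    with_dependencies: yes
  loop: "{{ jenkinsPlugins }}"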
So... this question is closed, unless someone has additional information to give. You're welcome!
Best Regards.
