docker-compose: unlink network from child containers when stopping parent containers?

This is a continuation of my journey of creating multiple docker projects dynamically. One thing I did not mention previously: to make this process dynamic, since I want devs to specify which projects they want to use, I'm using Ansible to bring up the local environment.
The logic is:
1. Run ansible-playbook run.yml -e "{projectsList: ['app-admin']}", providing the list of projects I want to start.
2. Stop the existing main containers (in case they are still running from a previous run).
3. Start the main containers.
4. Depending on the provided list of projects, run the role tasks (I have a separate role for each supported project):
   - stop the existing child project containers (in case they are still running from a previous run)
   - start the child project containers
   - apply some configuration depending on the role
And here is the issue (again) with the network: when I stop the main containers, it fails with the message:
error while removing network: network appnetwork has active endpoints
This makes sense, as the child docker containers use the same network, but so far I don't see a way to change the ordering of the tasks: since I'm using roles, the main docker tasks always run before the role-specific tasks.
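One workaround I'm considering: the docker_compose module appears to support a stopped flag, which would stop the services without removing the stack and its network. A rough sketch (I haven't verified the flag against my Ansible version):
- name: stop existing common app docker containers without removing the network
  docker_compose:
    project_src: ../docker/common/
    state: present
    stopped: yes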
The main Ansible file:
---
#- import_playbook: './services/old.yml'
- hosts: localhost
  gather_facts: true
  vars:
    # add list of all supported projects, THIS SHOULD BE UPDATED FOR EACH NEW PROJECT!
    supportedProjects: ['all', 'app-admin', 'app-landing']
  vars_prompt:
    - name: "ansible_become_pass"
      prompt: "Sudo password"
      private: yes
  pre_tasks:
    # List of projects should be provided
    - fail: msg="List of projects you want to run the playbook for was not provided"
      when: (projectsList is not defined) or (projectsList|length == 0)
    # Remove unsupported projects from list
    - name: Filter out unsupported projects
      set_fact:
        filteredProjectsList: "{{ projectsList | intersect(supportedProjects) }}"
    # Check if any projects remain after filtering
    - fail: msg="None of the provided projects are supported. Supported projects: {{ supportedProjects }}"
      when: filteredProjectsList|length == 0
    # Always stop existing docker containers
    - name: stop existing common app docker containers
      docker_compose:
        project_src: ../docker/common/
        state: absent
    - name: start common app docker containers like nginx proxy, redis, mailcatcher etc. (this can take a while if running for the first time)
      docker_compose:
        project_src: ../docker/common/
        state: present
        build: no
        nocache: no
    - name: Get www-data id
      command: docker exec app-php id -u www-data
      register: wwwid
    - name: Get current user group id
      command: id -g
      register: userid
    - name: Register user and www-data ids
      set_fact:
        userid: "{{ userid.stdout }}"
        wwwdataid: "{{ wwwid.stdout }}"
  roles:
    - { role: app-landing, when: '"app-landing" in filteredProjectsList or "all" in filteredProjectsList' }
    - { role: app-admin, when: ("app-admin" in filteredProjectsList) or ("all" in filteredProjectsList) }
And a role example, app-admin/tasks/main.yml:
---
- name: Sync {{ name }} with git (can take a while to clone the repo for the first time)
  git:
    repo: "{{ gitPath }}"
    dest: "{{ destinationPath }}"
    version: "{{ branch }}"
- name: stop existing {{ name }} docker containers
  docker_compose:
    project_src: "{{ dockerComposeFileDestination }}"
    state: absent
- name: start {{ name }} docker containers (this can take a while if running for the first time)
  docker_compose:
    project_src: "{{ dockerComposeFileDestination }}"
    state: present
    build: no
    nocache: no
- name: Copy {{ name }} env file
  copy:
    src: development.env
    dest: "{{ destinationPath }}.env"
    force: no
- name: Set file permissions for local {{ name }} project files
  command: chmod -R ug+w {{ projectPath }}
  become: yes
- name: Set execute permissions for local {{ name }} bin folder
  command: chmod -R +x {{ projectPath }}/bin
  become: yes
- name: Set user/group for {{ name }} to {{ wwwdataid }}:{{ userid }}
  command: chown -R {{ wwwdataid }}:{{ userid }} {{ projectPath }}
  become: yes
- name: Composer install for {{ name }}
  command: docker-compose -f {{ mainDockerComposeFileDestination }}docker-compose.yml exec -T app-php sh -c "cd {{ containerProjectPath }} && composer install"
Maybe there is a way to somehow unlink the network when the main containers stop. I thought that declaring the network as external in the child containers' compose file:
networks:
  appnetwork:
    external: true
would solve the issue, but it doesn't.

A quick experiment with an external network:
dc1/dc1.yml
version: "3.0"
services:
  nginx:
    image: nginx
    ports:
      - "8080:80"
    networks:
      - an0
networks:
  an0:
    external: true
dc2/dc2.yml
version: "3.0"
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
    networks:
      - an0
networks:
  an0:
    external: true
Starting and stopping:
$ docker network create -d bridge an0
1e07251e32b0d3248b6e70aa70a0e0d0a94e457741ef553ca5f100f5cec4dea3
$ docker-compose -f dc1/dc1.yml up -d
Creating dc1_nginx_1 ... done
$ docker-compose -f dc2/dc2.yml up -d
Creating dc2_redis_1 ... done
$ docker-compose -f dc1/dc1.yml down
Stopping dc1_nginx_1 ... done
Removing dc1_nginx_1 ... done
Network an0 is external, skipping
$ docker-compose -f dc2/dc2.yml down
Stopping dc2_redis_1 ... done
Removing dc2_redis_1 ... done
Network an0 is external, skipping
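So the key is to create the network outside of both compose projects (the docker network create step above) and mark it external in each compose file; docker-compose down then removes only its own containers and skips the shared network. In the Ansible playbook, the equivalent of docker network create could be a pre-task along these lines (a sketch using the docker_network module; adapt the network name to your setup):
- name: Ensure the shared app network exists before any compose project starts
  docker_network:
    name: appnetwork
    state: present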

Related

MySQL Container Wait Timeout

When trying to wait for the mysql docker container, I'm met with: Problem with dial: dial tcp 127.0.0.1:3306: connect: connection refused. Sleeping 1s
# This config is equivalent to both the '.circleci/extended/orb-free.yml' and the base '.circleci/config.yml'
version: 2.1
# Orbs are reusable packages of CircleCI configuration that you may share across projects, enabling you to create encapsulated, parameterized commands, jobs, and executors that can be used across multiple projects.
# See: https://circleci.com/docs/2.0/orb-intro/
orbs:
  node: circleci/node@5.0.1
# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
  version: 2
  node: # This is the name of the workflow, feel free to change it to better match your workflow.
    # Inside the workflow, you define the jobs you want to run.
    jobs:
      - build_and_test:
          # This is the node version to use for the `cimg/node` tag
          # Relevant tags can be found on the CircleCI Developer Hub
          # https://circleci.com/developer/images/image/cimg/node
          # If you are using yarn, change the line below from "npm" to "yarn"
          filters:
            branches:
              only:
                - master
executors:
  node:
    docker:
      - image: cimg/node:16.14.2
jobs:
  build_and_test:
    executor: node
    docker:
      - image: cimg/mysql:8.0
        auth:
          username: myuser
          password: $DOCKERHUB_PASSWORD
        environment:
          MYSQL_HOST: 127.0.0.1
          MYSQL_DATABASE: mydatabase
          MYSQL_USER: user
          MYSQL_PASSWORD: passw0rd
    steps:
      - checkout
      - run:
          name: install dockerize
          command: wget https://github.com/jwilder/dockerize/releases/download/v$DOCKERIZE_VERSION/dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz && tar -C /usr/local/bin -xzvf dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz && rm dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz
      - run:
          name: Wait for db
          command: dockerize -wait tcp://127.0.0.1:3306 -timeout 10s
I do see that the container is started under the "Spin up environment" step, so I believe it should be running:
Starting container cimg/mysql:8.0
cimg/mysql:8.0:
using image cimg/mysql@sha256:76f5b1dbd079f2fef5fe000a5c9f15f61df8747f28c24ad93bb42f8ec017a8df
pull stats: Image was already available so the image was not pulled
time to create container: 21ms
image is cached as cimg/mysql:8.0, but refreshing...
8.0: Pulling from cimg/mysql
Digest: sha256:76f5b1dbd079f2fef5fe000a5c9f15f61df8747f28c24ad93bb42f8ec017a8df
Status: Image is up to date for cimg/mysql:8.0
Time to upload agent and config: 369.899813ms
Time to start containers: 407.510271ms
However, nothing I've looked into so far has pointed me toward a solution.
Look over here: your job should be defined like the following. To test that the SQL container is up, you can just use nc -vz localhost 3306, but the MySQL container takes time to initialize, so wait about two minutes before testing.
jobs:
  build_and_test:
    docker:
      # Primary container image where all steps run.
      - image: cimg/node:16.14.2
      # Secondary container image on common network.
      - image: cimg/mysql:8.0
        auth:
          username: myuser
          password: $DOCKERHUB_PASSWORD
        environment:
          MYSQL_HOST: 127.0.0.1
          MYSQL_DATABASE: mydatabase
          MYSQL_USER: user
          MYSQL_PASSWORD: passw0rd
    steps:
      - checkout
      - run: sleep 120 && nc -vz localhost 3306
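A fixed sleep 120 works, but it wastes time whenever MySQL comes up sooner. An alternative sketch is to poll in a loop and only fail after a real timeout (the 60 attempts and 2-second interval are arbitrary values of mine):
    steps:
      - checkout
      - run:
          name: Wait for MySQL
          command: |
            for i in $(seq 1 60); do
              nc -z localhost 3306 && exit 0
              sleep 2
            done
            echo "MySQL did not become reachable in time" >&2
            exit 1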

ERROR! 'docker_container' is not a valid attribute for a Play

I am just writing a simple Ansible playbook to run a container and I'm getting an error.
This is my playbook code:
---
- name: Create container
  docker_container:
    name: mydata
    image: busybox
    volumes:
      - /data
I'm getting an error like this:
ERROR! 'docker_container' is not a valid attribute for a Play
Can anybody help, please?
You need to add some more lines to your playbook.
- name: Play name
  hosts: your_hosts
  tags: your_tag
  gather_facts: no|yes
  tasks:
    - name: Create container
      docker_container:
        name: mydata
        image: busybox
        volumes:
          - /data
If you haven't yet, run ansible-galaxy collection install community.docker on the machine where you run the playbook (the Ansible host); the docker_container module lives in that collection.
Then run: ansible-playbook your_playbook.yaml
Note that if you are using a volume, you may want to use the docker_volume module to configure it before the container starts. Also, try to map the volume, like - /data:/my/container/path, so you can find it more easily.
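Putting the pieces together, a complete minimal playbook could look like this (a sketch; your_hosts and the volume name mydata_vol are placeholders):
---
- name: Create data container
  hosts: your_hosts
  gather_facts: no
  tasks:
    - name: Create a named volume first
      docker_volume:
        name: mydata_vol
    - name: Create the container with the volume mapped
      docker_container:
        name: mydata
        image: busybox
        volumes:
          - mydata_vol:/data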

How to get container id from first docker-compose service inside second service?

I want to run filebeat as a sidecar container next to my main application container to collect application logs. I'm using docker-compose to start both services together, filebeat depending on the application container.
This is all working fine. I'm using a shared volume for the application logs.
However, I would also like to collect the docker container logs (stdout, JSON logging driver) in filebeat.
Filebeat provides a docker/container input module for this purpose. Here is my configuration: the first part gets the application logs, the second part should get the docker logs:
filebeat.inputs:
- type: log
  paths:
    - /path/to/my/application/*.log.json
  exclude_lines: ['DEBUG']
- type: docker
  containers.ids: '*'
  json.message_key: message
  json.keys_under_root: true
  json.add_error_key: true
  json.overwrite_keys: true
  tags: ["docker"]
What I don't like is the containers.ids: '*'. I would rather point filebeat directly at the application container, ignoring all others.
Since I don't know the container ID before I run docker-compose up to start both containers, I was wondering if there is an easy way to get the container ID of my application container into my filebeat container (via docker-compose?) so I can filter on this ID.
I think you may be able to work around the problem.
First, send all logs from the container to a syslog server via the compose logging options:
logging:
  driver: "syslog"
  options:
    syslog-address: "tcp://localhost:9000"
Then configure filebeat to get the logs from that syslog server like this (the protocol has to match the driver address, tcp here):
filebeat.inputs:
- type: syslog
  protocol.tcp:
    host: "localhost:9000"
This is also not really answering the question, but should work as a solution as well.
The main idea is to use a label within the filebeat autodiscover filter.
Taken from this post: https://discuss.elastic.co/t/filebeat-autodiscovery-filtering-by-container-labels/120201/5
filebeat.yml
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.labels.somelabel: "somevalue"
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
output.console:
  pretty: true
docker-compose.yml:
version: '3'
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:6.2.1
    command: "--strict.perms=false -v -e -d autodiscover,docker"
    user: root
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml
      - /var/lib/docker/containers:/var/lib/docker/containers
      - /var/run/docker.sock:/var/run/docker.sock
  test:
    image: alpine
    command: "sh -c 'while true; do echo test; sleep 1; done'"
    depends_on:
      - filebeat
    labels:
      somelabel: "somevalue"
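As a side note: if all you need is the container ID of a compose service once docker-compose up has run, the compose CLI can print it directly, e.g. for the test service above:
$ docker-compose ps -q test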

Unable to configure my Docker container using intermediate_instructions and pid_one_command in Test Kitchen

The following .kitchen.yml file fails to configure my docker container with the required tools mentioned in intermediate_instructions. The pid_one_command also does not work, as the container still loads with the bash shell.
Any ideas what is wrong with the file?
driver:
  name: docker
  socket: tcp://localhost:2375
  binary: docker.exe
  chef_version: latest
  privileged: true
provisioner:
  name: chef_zero
  # You may wish to disable always updating cookbooks in CI or other testing environments.
  # For example:
  #   always_update_cookbooks: <%= !ENV['CI'] %>
  always_update_cookbooks: true
verifier:
  name: inspec
platforms:
  - name: ubuntu-16.04
    driver:
      image: ubuntu:16.04
      pid_one_command: /bin/systemd
      intermediate_instructions:
        - RUN /usr/bin/apt-get install -y lsof which initscripts net-tools
suites:
  - name: default
    run_list:
      - recipe[testy::default]
    verifier:
      inspec_tests:
        - test/smoke/default
    attributes:
I think you're trying to use config options for kitchen-dokken with kitchen-docker. The two projects are unrelated.
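If I remember the kitchen-docker options correctly, its rough equivalents are provision_command (extra RUN steps in the generated Dockerfile) and run_command (the container's PID 1). Treat this as an unverified sketch against your platform block:
platforms:
  - name: ubuntu-16.04
    driver:
      image: ubuntu:16.04
      run_command: /bin/systemd
      provision_command:
        - /usr/bin/apt-get update
        - /usr/bin/apt-get install -y lsof net-tools initscripts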

building docker container with startup scripts via ansible

I am trying to build a docker container which should include startup scripts in the container's /etc/my_init.d directory via Ansible. I have difficulty finding any documentation on how to do this. Here is the relevant portion of my YAML file:
- name: Create container
  docker:
    name: myserver
    image: "{{ docker_repo }}/myserver:{{ server.version }}"
    state: started
    restart_policy: always
    docker_api_version: 1.18
    registry: "{{ docker_repo }}"
    username: "{{ registry_user }}"
    password: "{{ registry_password }}"
    links:
      - "mywebservices"
    ports:
      - "8000:8000"
      - "9899:9899"
    volumes:
      - "{{ myserver_home_dir }}/logs:/var/log/my_server"
    env:
      MY_ENVIRONMENT: "{{ my_environment }}"
  when: myserver_action == "create"
        or (myserver_action == "diff-create" and myserver.changed)
        or myserver_action == "update"
What should I add here to tell Ansible to put my files into the container's /etc/my_init.d during the build?
First of all, you can't build a container (you can only start one); you build images.
Second, the docker module is deprecated; use docker_image to build images.
You should copy your files into a build directory (with the copy or synchronize modules), for example:
/tmp/build
Then create a Dockerfile that will take them from the build directory and add them to your image.
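For instance, a sketch of those two steps (the script name my_init_script.sh and the phusion/baseimage base are placeholders; /etc/my_init.d is that image's convention):
- name: Stage the startup script into the build directory
  copy:
    src: my_init_script.sh
    dest: /tmp/build/my_init_script.sh
    mode: '0755'
- name: Write a minimal Dockerfile
  copy:
    dest: /tmp/build/Dockerfile
    content: |
      FROM phusion/baseimage
      COPY my_init_script.sh /etc/my_init.d/my_init_script.sh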
After that, call docker_image:
docker_image:
  path: /tmp/build
  name: myimage
Finally, start your container:
docker_container:
  image: myimage
  name: mycontainer
Unsure if it's relevant, as I don't know what your startup Ansible content is doing, but it's probably worth looking at the Ansible Container project.
https://github.com/ansible/ansible-container
You can build your container images using Ansible roles instead of a Dockerfile, orchestrate them locally, and deploy them to production Kubernetes or Red Hat OpenShift.
