Cannot execute ansible playbook via docker container - jenkins

I'm executing a pipeline on Jenkins that runs inside a Docker container. This pipeline calls another docker-compose file that executes an Ansible playbook. The service that executes the playbook is called agent and is defined as follows:
agent:
  image: pjestrada/ansible
  links:
    - db
  environment:
    PROBE_HOST: "db"
    PROBE_PORT: "3306"
  command: ["probe.yml"]
This is the image it uses:
FROM ubuntu:trusty
MAINTAINER Pablo Estrada <pjestradac@gmail.com>
# Prevent dpkg errors
ENV TERM=xterm-256color
RUN sed -i "s/http:\/\/archive./http:\/\/nz.archive./g" /etc/apt/sources.list
#Install ansible
RUN apt-get update -qy && \
apt-get install -qy software-properties-common && \
apt-add-repository -y ppa:ansible/ansible && \
apt-get update -qy && \
apt-get install -qy ansible
# Copy baked in playbooks
COPY ansible /ansible
# Add volume for Ansible playbooks
VOLUME /ansible
WORKDIR /ansible
RUN chmod +x /
#Entrypoint
ENTRYPOINT ["ansible-playbook"]
CMD ["site.yml"]
My local machine is Ubuntu 16.04, and when I run docker-compose up agent the playbook executes successfully. However, when I'm inside the Jenkins container I get this error on the same command call:
Attaching to todobackend9dev_agent_1
agent_1 | ERROR! the playbook: site.yml does not appear to be a file
These are the image and compose files for my Jenkins container:
FROM jenkins:1.642.1
MAINTAINER Pablo Estrada <pjestradac@gmail.com>
# Suppress apt installation warnings
ENV DEBIAN_FRONTEND=noninteractive
# Change to root user
USER root
# Used to set the docker group ID
# Set to 497 by default, which is the group ID used by AWS Linux ECS Instance
ARG DOCKER_GID=497
# Create Docker Group with GID
# Set default value of 497 if DOCKER_GID set to blank string by Docker Compose
RUN groupadd -g ${DOCKER_GID:-497} docker
# Used to control Docker and Docker Compose versions installed
# NOTE: As of February 2016, AWS Linux ECS only supports Docker 1.9.1
ARG DOCKER_ENGINE=1.10.2
ARG DOCKER_COMPOSE=1.6.2
# Install base packages
RUN apt-get update -y && \
apt-get install apt-transport-https curl python-dev python-setuptools gcc make libssl-dev -y && \
easy_install pip
# Install Docker Engine
RUN apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D && \
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | tee /etc/apt/sources.list.d/docker.list && \
apt-get update -y && \
apt-get purge lxc-docker* -y && \
apt-get install docker-engine=${DOCKER_ENGINE:-1.10.2}-0~trusty -y && \
usermod -aG docker jenkins && \
usermod -aG users jenkins
# Install Docker Compose
RUN pip install docker-compose==${DOCKER_COMPOSE:-1.6.2} && \
pip install ansible boto boto3
# Change to jenkins user
USER jenkins
# Add Jenkins plugins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
Compose File:
version: '2'
volumes:
  jenkins_home:
    external: true
services:
  jenkins:
    build:
      context: .
      args:
        DOCKER_GID: ${DOCKER_GID}
        DOCKER_ENGINE: ${DOCKER_ENGINE}
        DOCKER_COMPOSE: ${DOCKER_COMPOSE}
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
I put a volume in place in order to access the Docker socket from my Jenkins container. However, for some reason I can't access the site.yml file the playbook needs, even though the file is available outside the container.
Can anyone help me solve this issue?

How sure are you about that volume mount point and your paths?
- jenkins_home:/var/jenkins_home
Have you tried debugging via echo? If it can't find the site.yml then paths are the most likely cause. You can use Jenkins replay on a job to iterate quickly and modify parts of the Jenkins code. That will let you run things like
sh "pwd; ls -la"
I recommend adding the equivalent within your docker container so you can check the paths. My guess is that the workspace isn't where you think it is, and you'll want to run docker with:
-v ${env.WORKSPACE}:/jenkins-workspace
and then within the container:
pushd /jenkins-workspace
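For instance, here is a scripted-pipeline sketch of that debugging approach (a sketch only; the stage name and mount path are illustrative, and the image name is taken from the question):
node {
    stage('debug-paths') {
        // Show where Jenkins actually placed the workspace
        sh "pwd; ls -la"
        // Run the same check inside the ansible image, overriding the
        // entrypoint since the image normally runs ansible-playbook directly
        sh "docker run --rm --entrypoint /bin/sh -v ${env.WORKSPACE}:/jenkins-workspace pjestrada/ansible -c 'cd /jenkins-workspace && pwd && ls -la'"
    }
}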

Related

Building a Dockerfile from inside Docker Compose

So I'm trying to follow these instructions:
https://github.com/open-forest/sendy
I'm using Portainer and trying to run a Sendy container (newsletter software). Instead of running a MySQL image alongside it, I'm using my external managed database.
On my server I keep project data at: /var/docker/project-name. I use this structure for bind mounting if I need to bring data into the containers from the start.
So for this project, in the project-name folder I have sendy-6.0.2.zip and this Dockerfile (this file was provided via the instructions at the above link):
#
# Docker with Sendy Email Campaign Marketing
#
# Build:
# $ docker build -t sendy:latest --target sendy -f ./Dockerfile .
#
# Build w/ XDEBUG installed
# $ docker build -t sendy:debug-latest --target debug -f ./Dockerfile .
#
# Run:
# $ docker run --rm -d --env-file sendy.env sendy:latest
FROM php:7.4.8-apache as sendy
ARG SENDY_VER=6.0.2
ARG ARTIFACT_DIR=6.0.2
ENV SENDY_VERSION ${SENDY_VER}
RUN apt -qq update && apt -qq upgrade -y \
# Install unzip cron
&& apt -qq install -y unzip cron \
# Install php extension gettext
# Install php extension mysqli
&& docker-php-ext-install calendar gettext mysqli \
# Remove unused packages
&& apt autoremove -y
# Copy artifacts
COPY ./artifacts/${ARTIFACT_DIR}/ /tmp
# Install Sendy
RUN unzip /tmp/sendy-${SENDY_VER}.zip -d /tmp \
&& cp -r /tmp/includes/* /tmp/sendy/includes \
&& mkdir -p /tmp/sendy/uploads/csvs \
&& chmod -R 777 /tmp/sendy/uploads \
&& rm -rf /var/www/html \
&& mv /tmp/sendy /var/www/html \
&& chown -R www-data:www-data /var/www \
&& mv /usr/local/etc/php/php.ini-production /usr/local/etc/php/php.ini \
&& rm -rf /tmp/* \
&& echo "\nServerName \${SENDY_FQDN}" > /etc/apache2/conf-available/serverName.conf \
# Ensure X-Powered-By is always removed regardless of php.ini or other settings.
&& printf "\n\n# Ensure X-Powered-By is always removed regardless of php.ini or other settings.\n\
Header always unset \"X-Powered-By\"\n\
Header unset \"X-Powered-By\"\n" >> /var/www/html/.htaccess \
&& printf "[PHP]\nerror_reporting = E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n" > /usr/local/etc/php/conf.d/error_reporting.ini
# Apache config
RUN a2enconf serverName
# Apache modules
RUN a2enmod rewrite headers
# Copy hello-cron file to the cron.d directory
COPY cron /etc/cron.d/cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/cron \
# Apply cron job
&& crontab /etc/cron.d/cron \
# Create the log file to be able to run tail
&& touch /var/log/cron.log
COPY artifacts/docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["apache2-foreground"]
#######################
# XDEBUG Installation
#######################
FROM sendy as debug
# Install xdebug extension
RUN pecl channel-update pecl.php.net \
&& pecl install xdebug \
&& docker-php-ext-enable xdebug \
&& rm -rf /tmp/pear
Here is my Docker Compose file:
version: '3.7'
services:
  project-sendy:
    container_name: project-sendy
    image: sendy:6.0.2
    build:
      dockerfile: var/docker/project-sendy/Dockerfile
    restart: unless-stopped
    networks:
      - proxy
      - default
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.project-secure.entrypoints=websecure"
      - "traefik.http.routers.project-secure.rule=Host(`project.com`)"
    environment:
      SENDY_PROTOCOL: https
      SENDY_FQDN: project.com
      MYSQL_HOST: db-host-name-here
      MYSQL_DATABASE: db-name-here
      MYSQL_USER: db-user-name-here
      MYSQL_PASSWORD: db-password-here
      SENDY_DB_PORT: db-port-here
networks:
  proxy:
    external: true
When I try to deploy I get:
failed to deploy a stack: project-sendy Pulling project-sendy
Error could not find /data/compose/126/var/docker/project-sendy:
stat /data/compose/126/var/docker/project-sendy: no such file or directory
So here's what I've done.
I have the cron and artifacts folders in the same directory as the Dockerfile.
In the Dockerfile look for this line:
COPY artifacts/docker-entrypoint.sh /usr/local/bin/
Right below it put this line:
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
Otherwise you will get this error:
Starting Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/usr/local/bin/docker-entrypoint.sh": permission denied: unknown
Then build it with:
docker build -t sendy:6.0.2 .
Then your image will show up in portainer.
You can then remove the build section in your docker compose file and hit deploy. It now works for me.
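Alternatively, if you want to keep the build section, the error shows the dockerfile path being resolved relative to the compose project directory (/data/compose/126 in this case); pointing the build at the project folder explicitly should also work (a sketch, assuming the Dockerfile lives in /var/docker/project-name):
build:
  context: /var/docker/project-name
  dockerfile: Dockerfile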

Create database from one docker container in another

I have a program that builds servers automatically whenever we want stakeholders to test a new feature.
Currently I have the following setup:
Container 1 - all (contains nodejs, php and other dependencies)
Container 2 - db (contains the mysql database)
I'm aware that container 1 should be split, but that would add unnecessary complexity at this stage of development.
Whenever a new feature is completed and ready to be deployed to a stage server we run: yarn run create:server --branchName=new-feature. This will create all of the configuration necessary to bring up our newly created server.
My problem is that whenever I run the command above, I need to create a database in the db container from the all container:
mysql -u root -pxxxx -e "CREATE DATABASE IF NOT EXISTS `xxxx`"
The script main.ts runs in the context of the all container, so all needs to communicate with db.
import { execSync } from 'child_process';

export const createDatabase = (subdomain: string) => {
  const username = process.env.DB_USERNAME;
  const password = process.env.DB_PASSWORD;
  console.log(`[INFO] Creating database with name \`${subdomain}\``);
  // the triple backslash is necessary to avoid `command substitution` in some shells
  if (isLocalEnviroment()) {
    execSync(`docker run -it stage-manager-db mysql -u ${username} -p${password} -e "CREATE DATABASE IF NOT EXISTS \\\`${subdomain}\\\`"`)
  } else {
    execSync(`mysql -u ${username} -p${password} -e "CREATE DATABASE IF NOT EXISTS \\\`${subdomain}\\\`"`)
  }
  console.log(`[INFO] Database \`${subdomain}\` created successfully`);
}
In the local environment we would like to use Docker, while in production everything will sit on the same machine (db, frontendapp and api).
When trying to run the following command from all: docker run -it stage-manager-db mysql -u root -ppassword -e "CREATE DATABASE IF NOT EXISTS master", I get:
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I have tried restarting the service with:
service docker restart
which gives
[ ok ] Starting Docker: docker.
but trying to communicate with db from all still produces the same error. Upon running service docker stop I get:
[....] Stopping Docker: dockerstart-stop-daemon: warning: failed to kill 825: No such process
No process in pidfile '/var/run/docker-ssd.pid' found running; none killed.
failed!
So far I have tried the following links to fix this issue:
https://github.com/docker/for-linux/issues/52#issuecomment-333563492
https://askubuntu.com/questions/1146634/how-to-remove-docker-from-windows-subsystem
Cannot connect to the Docker daemon at unix:///var/run/docker.sock
Cant uninstall Docker from Ubuntu on WSL
How can I communicate from all container to db container?
Dockerfile
FROM php:7.4-fpm
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
zip \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
git \
curl \
libzip-dev \
libfontconfig1 \
libxrender1 \
libpng-dev \
make \
nginx \
apt-transport-https \
gnupg2 \
wget \
procps \
docker.io
# Install nodejs
RUN apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt -y install nodejs
# Install extensions
RUN docker-php-ext-install pdo_mysql exif zip pcntl gd
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
RUN docker-php-ext-install -j$(nproc) gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y git
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install Yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt update && apt install yarn
# Install dependencies for this project
RUN yarn global add ts-node typescript
RUN useradd -m forge
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=forge:forge . /var/forge
# Copy ssh keys
COPY ./config/ssh /home/forge/.ssh/
# Give right permissions to `ssh` keys
RUN chmod 600 /home/forge/.ssh/config
RUN chmod 600 /home/forge/.ssh/back_end_deploy_key
RUN chmod 600 /home/forge/.ssh/frontend_deploy_key
RUN chmod 644 /home/forge/.ssh/back_end_deploy_key.pub
RUN chmod 644 /home/forge/.ssh/frontend_deploy_key.pub
RUN chown forge:forge /home/forge/.ssh/*
# Up Docker
RUN service docker start
RUN usermod -aG docker forge
# Create folder for stage servers
RUN mkdir -p /var/www/stage-servers
# Give correct permissions to `stage-servers` folder
RUN chown forge:www-data /var/www/stage-servers
RUN chmod g+s /var/www/stage-servers
RUN chmod o-rwx /var/www/stage-servers
# Change current user to forge
USER forge
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
docker-compose.yml
version: '3.7'
services:
  all:
    working_dir: /var/www/stage-manager
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"
    volumes:
      - "./:/var/www/stage-manager"
      - "./config/ssh:/root/.ssh"
    networks:
      - main
  #MySQL Service
  db:
    image: mysql:5.7.22
    container_name: stage-manager-db
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: whatever
      MYSQL_ROOT_PASSWORD: password
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - dbdata:/var/lib/mysql/
    networks:
      - main
volumes:
  project:
    driver: local
    driver_opts:
      type: none
      device: $PWD/
      o: bind
  dbdata:
    driver: local
networks:
  main:
I'm fairly new to Docker, so if any part of my approach is wrong, please let me know. I have a feeling this could be done much better, so feel free to suggest improvements.
Update
** DO NOT DO THIS **
Instead of deleting this answer I will leave it here so others can see that this is not a secure/valid solution to this problem.
As David Maze commented:
Remember that anyone who can access the Docker socket has unrestricted root-level access over the whole host system. I would not add the Docker socket in casually here.
I was able to make it work by sharing the socket between my host OS and the all container.
docker-compose.yml
all:
  working_dir: /var/www/stage-manager
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "8080:80"
  volumes:
    - "./:/var/www/stage-manager"
    - "./config/ssh:/root/.ssh"
    - "/var/run/docker.sock:/var/run/docker.sock" # <- the important part
  networks:
    - main
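A safer alternative (my sketch, not part of the original answer): since both services sit on the main network, the all container can reach MySQL directly through the service hostname db, with no Docker socket required. This assumes the mysql client is installed in the all image:
# run from inside the `all` container; `db` resolves via the compose network
mysql -h db -P 3306 -u root -ppassword -e "CREATE DATABASE IF NOT EXISTS \`master\`"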

Give permission to jenkins to access unix:///var/run/docker.sock

I have installed the docker plugin into jenkins and I am trying to configure a docker cloud.
My jenkins installation is running inside a docker container and I have bound to the docker socket on the host like so:
version: '3.3'
services:
  jenkins:
    container_name: jenkins
    ports:
      - '7345:8080'
      - '50000:50000'
    volumes:
      - /docker/jenkins/data/jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    image: 'jenkins/jenkins:lts'
This method works fine using docker-ce-cli: if I install the CLI and bind to the host's socket, it works.
However, while setting up Jenkins I am getting a permissions error.
Inside the jenkins container everything is run under user "jenkins" with a UID of 1000. On my host, UID 1000 is a user called "ubuntu".
I have added this user to the docker group
usermod -aG docker ubuntu
And checked the socket permissions:
# ls -lisa /var/run/docker.sock
833 0 srw-rw---- 1 root docker 0 Jul 22 22:02 /var/run/docker.sock
But Jenkins still complains it doesn't have permissions.
What is the right way to give Jenkins permission to access this socket?
None of the customizations in the other thread worked, but I tweaked it a bit and got it working with the file below:
FROM jenkins/jenkins
USER 0
ARG DOCKERGID=998
# Docker
RUN apt-get update \
&& apt-get install software-properties-common apt-transport-https ca-certificates gnupg-agent dialog apt-utils -y \
&& curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - \
&& add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable" \
&& apt-get update \
&& apt-get install docker-ce-cli -y
# Setup users and groups
RUN addgroup --gid ${DOCKERGID} docker
RUN usermod -aG docker jenkins
USER 1000
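When building that image, you can pass in the GID of the host's Docker socket so the group IDs line up (a usage sketch; the image tag is illustrative):
docker build --build-arg DOCKERGID=$(stat -c '%g' /var/run/docker.sock) -t my-jenkins .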
To be able to use Docker from Jenkins, just add the jenkins user to the docker group, not the ubuntu one.
usermod -aG docker jenkins

How to combine Dockerfiles in gitlab ci?

I have this gitlab-ci.yml to build my SpringBoot app:
image: maven:latest
variables:
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
cache:
  paths:
    - .m2/repository/
    - target/
build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS clean compile
  only:
    - /^release.*/
test:
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test
    - "cat target/site/coverage/jacoco-ut/index.html"
  only:
    - /^release.*/
Now I need to run another job in the test stage: integration tests. My app runs its integration tests on headless Chrome with an in-memory database; all I need to do on Windows is run mvn integration-test.
I've found a Dockerfile that has headless Chrome ready, so I need to combine the maven:latest image with this new image: https://hub.docker.com/r/justinribeiro/chrome-headless/
How can I do that?
You can write a new Dockerfile using maven:latest as the base image, which gives you all the dependencies of the latest Maven image. You can refer to this link for how to write a Dockerfile.
Since maven:latest is a Debian-based image and the headless Chrome Dockerfile is also Debian-based, all the OS commands are the same. So you can write a Dockerfile like the following, where the base image is maven:latest and the rest is the same as here:
FROM maven:latest
LABEL name="chrome-headless" \
maintainer="Justin Ribeiro <justin@justinribeiro.com>" \
version="2.0" \
description="Google Chrome Headless in a container"
# Install deps + add Chrome Stable + purge all the things
RUN apt-get update && apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg \
--no-install-recommends \
&& curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list \
&& apt-get update && apt-get install -y \
google-chrome-beta \
fontconfig \
fonts-ipafont-gothic \
fonts-wqy-zenhei \
fonts-thai-tlwg \
fonts-kacst \
fonts-symbola \
fonts-noto \
ttf-freefont \
--no-install-recommends \
&& apt-get purge --auto-remove -y curl gnupg \
&& rm -rf /var/lib/apt/lists/*
# Add Chrome as a user
RUN groupadd -r chrome && useradd -r -g chrome -G audio,video chrome \
&& mkdir -p /home/chrome && chown -R chrome:chrome /home/chrome \
&& mkdir -p /opt/google/chrome-beta && chown -R chrome:chrome /opt/google/chrome-beta
# Run Chrome non-privileged
USER chrome
# Expose port 9222
EXPOSE 9222
# Autorun chrome headless with no GPU
ENTRYPOINT [ "google-chrome" ]
CMD [ "--headless", "--disable-gpu", "--remote-debugging-address=0.0.0.0", "--remote-debugging-port=9222" ]
I have checked this and it's working fine. Once you have written the Dockerfile you can build it using docker build . from the same directory as the Dockerfile. Then you can either push the image to Docker Hub or to your own registry, as long as your GitLab runner can access it. Make sure you tag the image; for example, suppose you push it to your own repository as {your-docker-repo}/maven-with-chrome-headless:1.0.0.
Then use that tag in your gitlab-ci.yml file: image: {your-docker-repo}/maven-with-chrome-headless:1.0.0
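From there, an integration-test job in the test stage could reference that image (a sketch; the job name and image tag are assumptions):
integration_test:
  stage: test
  image: {your-docker-repo}/maven-with-chrome-headless:1.0.0
  script:
    - mvn $MAVEN_CLI_OPTS integration-test
  only:
    - /^release.*/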
You do not "combine" Docker containers. You put different services into different containers and run them all together. Look at Kubernetes (it now has generic support in GitLab) or choose a simpler solution like docker-compose or docker-swarm.
For integration tests we use docker-compose.
Anyway, if you use docker-compose, you will probably run into the situation where you need so-called docker-in-docker. It depends on the type of executor you use to run your GitLab jobs. If you use the shell executor, everything will be fine. If you use the docker executor, you will have to set it up properly, because you can't call Docker from Docker without additional manual setup.
If using several containers is not your choice and you definitely want to put everything in one container, the recommended way is to use a supervisor to launch the processes inside the container. One of the options is supervisord: http://supervisord.org/
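For illustration, a minimal supervisord.conf sketch (the program names and commands are hypothetical, not from the original setup):
[supervisord]
nodaemon=true

[program:app]
command=java -jar /opt/app/app.jar

[program:chrome]
command=google-chrome --headless --disable-gpu --remote-debugging-port=9222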

How to map a host OS file into the container at build time

docker-compose.yml:
version: '3'
services:
  ezmove:
    volumes:
      - /host-dir:/home/container-dir
    build:
      context: .
      args:
        BRANCH: develop
Dockerfile:
FROM appcontainers/ubuntu:xenial
MAINTAINER user <user>
RUN apt-get update -y --no-install-recommends \
&& apt-get install -y --no-install-recommends python3.5-minimal python3.5-venv \
&& apt-get install -y --no-install-recommends git \
&& apt-get install -y --no-install-recommends python-pip \
&& pip install --upgrade pip \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /home/container-dir
WORKDIR /home/container-dir
RUN /bin/bash -c "sh ./script.sh"
How do I map a local directory into the container at build time?
When I run docker-compose up, it starts building the container, but after installing the package dependencies it tries to execute the script.sh file and fails with "FILE NOT FOUND!".
Constraints:
I do not want to do a git clone inside the Docker container
I do not want to store the source code inside the container
So, how can I map a host OS file into the container at build time?
You lack a COPY or ADD instruction in your Dockerfile to copy your script.sh into the image.
Check the docs
https://docs.docker.com/engine/reference/builder/#add
https://docs.docker.com/engine/reference/builder/#copy
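For example (a sketch, assuming script.sh sits next to the Dockerfile in the build context):
# copy the script into the image so it exists at build time
COPY script.sh /home/container-dir/script.sh
WORKDIR /home/container-dir
RUN /bin/bash -c "sh ./script.sh"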
By the way, Docker is about isolation, so a running container should be isolated from the host, and certainly not access the host OS.
