Ansible Playbook execution in Docker container skipping task completion

I am trying to Dockerize an Ansible playbook that runs a MySQL dump:
---
- name: MySQL Dump
  hosts: lh
  connection: local
  vars_files:
    - ./vars/sql-credentials.yml
    - ./vars/aws-credentials.yml
  tasks:
    - name: Create Folder To Store
      file:
        state: directory
        path: ./backups
    - name: Get Dump from MySQL DB
      mysql_db:
        name: "{{ db_name }}"
        config_file: .my.cnf
        login_host: "{{ db_login_host }}"
        login_user: "{{ db_login_user }}"
        login_password: "{{ db_login_password }}"
        state: dump
        target: "backups/z-{{ ansible_date_time.date }}.sql.gz"
Executing the playbook outside of the container, I get the SQL dump file that I am expecting (the dump file is about 80 MB). When the playbook runs inside the container, it finishes early (almost immediately after the task is listed) without any errors, and the output file is only 20 KB.
FROM ubuntu:bionic
RUN apt-get update && \
    apt-get dist-upgrade -y && \
    apt-get install -y software-properties-common systemd && \
    rm -rf /var/lib/apt/lists/*
RUN apt-add-repository -y ppa:ansible/ansible && apt-get update && apt-get install -y \
    git \
    ansible \
    mysql-client \
    python-pip \
    python-dev \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
COPY . .
RUN pip install -r requirements.txt
RUN echo "[local]\nlocalhost ansible_connection=local" > /etc/ansible/hosts
CMD [ "ansible-playbook", "-i", "inventory", "playbook.yml", "-vvvv" ]
I've tried running it in verbose mode inside the container, and I'm not getting any errors at all.
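One way to see what the mysql_db module actually returns inside the container is to register its result and print it; a sketch of what the dump task above could look like with that added (dump_result is a name introduced here purely for illustration):

    - name: Get Dump from MySQL DB
      mysql_db:
        name: "{{ db_name }}"
        login_host: "{{ db_login_host }}"
        login_user: "{{ db_login_user }}"
        login_password: "{{ db_login_password }}"
        state: dump
        target: "backups/z-{{ ansible_date_time.date }}.sql.gz"
      register: dump_result
    - name: Show dump result
      debug:
        var: dump_result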
The reason I want this to run in a container is that I want Travis CI to execute the dump on a cron schedule and then deploy the result to an S3 bucket.
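For context, a rough .travis.yml sketch of that pipeline (the image tag, bucket name, and paths are placeholders I'm assuming; the cron itself is configured in the Travis CI settings UI, not in the file):

language: minimal
services:
  - docker
script:
  - docker build -t mysql-dump .
  - docker run --name dump mysql-dump
  - docker cp dump:/backups ./backups   # the image sets no WORKDIR, so ./backups lands in /
deploy:
  provider: s3
  access_key_id: $AWS_ACCESS_KEY_ID
  secret_access_key: $AWS_SECRET_ACCESS_KEY
  bucket: my-dump-bucket
  local_dir: backups
  skip_cleanup: true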

Related

Modprobe not installed on docker kubernetes container in GKE

I'm trying to run gcsfuse inside my Docker container to access my Google Cloud Storage, so my program inside the container can use it. I'm using Google Kubernetes Engine. My problem is that when I run whereis modprobe I get no results, meaning modprobe is not installed. I've seen this post and this one, but they were futile. I've already run sudo apt install update && sudo apt install upgrade to upgrade my kernels, and I also tried simply sudo apt-get install modprobe, which results in "package not found". I've edited my deployment.yaml file to include these (I'm deploying through GitHub Actions):
spec:
  ...
  securityContext:
    privileged: true
    capabilities:
      add:
        - SYS_ADMIN
        - SYS_MODULE
  env:
    - name: STARTUP_SCRIPT
      value: |
        #! /bin/bash
        modprobe fuse
But these didn't change anything at all. I've seen in a post that I must add something like lib/modules, but I already have a lib directory inside my container that my program uses; is there a workaround for that? Am I installing gcsfuse wrong? (Installing gcsfuse was hard, normal practices didn't work, but in the end we made it work.)
Here is my gcsfuse installation:
RUN apt-get update -y && apt-get dist-upgrade -y && apt-get -y install lsb-release curl gnupg && apt -y install lsb-core
ENV GCSFUSE_REPO gcsfuse-stretch
RUN apt-get update -y && apt-get install -y --no-install-recommends apt-transport-https ca-certificates curl gnupg
RUN echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | tee /etc/apt/sources.list.d/gcsfuse.list
RUN echo "deb https://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# Install gcsfuse and google cloud sdk
RUN apt-get update -y && apt-get install -y gcsfuse google-cloud-sdk \
    && apt-get autoremove -y \
    && apt-get clean -y \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
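For what it's worth, once the package is installed and the pod has the right privileges, the mount itself is typically a single command; the bucket name and mountpoint below are placeholders, and the mountpoint directory has to exist first:

gcsfuse my-bucket /mnt/gcs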
This is a continuation of another error I had while trying to run gcsfuse; I realised I don't have modprobe and asked this question. Ubuntu image 22.10.
Edit: This question says I must add stuff to a YAML file, but I'm not sure which YAML file to add it to. Since I'm using GitHub Actions I have deployment.yaml, kustomization.yaml and service.yaml.
If you provide the following as your job YAML file, it will grant the correct privileges to the Job you are creating, rather than to the Deployment in deployment.yaml:
apiVersion: batch/v1
kind: Job
metadata:
  # Unique key of the Job instance
  name: job-40
spec:
  template:
    metadata:
      name: job-40
    spec:
      containers:
        - name: nginx-1
          image: gcr.io/videoo2/github.com/...
          command: ["/bin/sleep"]
          args: ["1000"]
          securityContext:
            privileged: true
            capabilities:
              add:
                - SYS_ADMIN
      # Do not restart containers after they exit
      restartPolicy: Never
  # Number of retries before marking the Job as failed
  backoffLimit: 0
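If the workload does have to stay a Deployment, the same securityContext block goes under the container entry in deployment.yaml; a minimal sketch (the metadata name and labels are placeholders, the image is the one from the Job above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: gcr.io/videoo2/github.com/...
          securityContext:
            privileged: true
            capabilities:
              add:
                - SYS_ADMIN
                - SYS_MODULE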

Ansible Molecule Test Error: cannot reload udev rules: exit status 1

I'm trying to set up Ansible Molecule for testing roles on different OSes. For example, this role fails when it gets to the task that runs snap install core:
https://github.com/ProfessorManhattan/Ansible-Role-Snapd
molecule.yml:
---
dependency:
  name: galaxy
  options:
    role-file: requirements.yml
    requirements-file: requirements.yml
driver:
  name: docker
platforms:
  - name: Ubuntu-20.04
    image: professormanhattan/ansible-molecule-ubuntu2004
    command: /sbin/init
    tmpfs:
      - /run
      - /tmp
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
    pre_build_image: true
provisioner:
  name: ansible
  connection_options:
    ansible_connection: docker
    ansible_password: ansible
    ansible_ssh_user: ansible
  inventory:
    group_vars:
      all:
        molecule_test: true
  options:
    vvv: true
  playbooks:
    converge: converge.yml
verifier:
  name: ansible
scenario:
  create_sequence:
    - dependency
    - create
    - prepare
  check_sequence:
    - dependency
    - cleanup
    - destroy
    - create
    - prepare
    - converge
    - check
    - destroy
  converge_sequence:
    - dependency
    - create
    - prepare
    - converge
  destroy_sequence:
    - dependency
    - cleanup
    - destroy
  test_sequence:
    - lint
    - dependency
    - cleanup
    - destroy
    - syntax
    - create
    - prepare
    - converge
    - idempotence
    - side_effect
    - verify
    - cleanup
    - destroy
install-Debian.yml:
---
- name: Ensure snapd is installed
  apt:
    name: snapd
    state: present
    update_cache: true
- name: Ensure fuse filesystem is installed
  apt:
    name: fuse
    state: present
- name: Ensure snap is started and enabled on boot
  ansible.builtin.systemd:
    enabled: true
    name: snapd
    state: started
- name: Ensure snap core is installed # This task is failing
  community.general.snap:
    name: core
    state: present
The error I receive is:
TASK [professormanhattan.snapd : Ensure fuse filesystem is installed] **********
ok: [Ubuntu-20.04]
TASK [professormanhattan.snapd : Ensure snap is started and enabled on boot] ***
changed: [Ubuntu-20.04]
TASK [professormanhattan.snapd : Ensure snap core is installed] ****************
fatal: [Ubuntu-20.04]: FAILED! => {"changed": false, "channel": "stable", "classic": false, "cmd": "sh -c \"/usr/bin/snap install core\"", "msg": "Ooops! Snap installation failed while executing 'sh -c \"/usr/bin/snap install core\"', please examine logs and error output for more details.", "rc": 1, "stderr": "error: cannot perform the following tasks:\n- Setup snap \"core\" (10823) security profiles (cannot reload udev rules: exit status 1\nudev output:\nFailed to send reload request: No such file or directory\n)\n", "stderr_lines": ["error: cannot perform the following tasks:", "- Setup snap \"core\" (10823) security profiles (cannot reload udev rules: exit status 1", "udev output:", "Failed to send reload request: No such file or directory", ")"], "stdout": "", "stdout_lines": []}
The same is true for all the other operating systems I'm trying to test. Here is the Dockerfile I'm using to build the Ubuntu image:
Dockerfile:
FROM ubuntu:20.04
LABEL maintainer="help@megabyte.space"
ENV container docker
ENV DEBIAN_FRONTEND noninteractive
# Source: https://github.com/ansible/molecule/issues/1104
RUN set -xe \
    && apt-get update \
    && apt-get install -y apt-utils \
    && apt-get upgrade -y \
    && apt-get install -y \
        build-essential \
        libyaml-dev \
        python3-apt \
        python3-dev \
        python3-pip \
        python3-setuptools \
        python3-yaml \
        software-properties-common \
        sudo \
        systemd \
        systemd-sysv \
    && apt-get clean \
    && pip3 install \
        ansible \
        ansible-lint \
        flake8 \
        molecule \
        yamllint \
    && mkdir -p /etc/ansible \
    && echo "[local]\nlocalhost ansible_connection=local" > /etc/ansible/hosts \
    && groupadd -r ansible \
    && useradd -m -g ansible ansible \
    && usermod -aG sudo ansible \
    && sed -i "/^%sudo/s/ALL\$/NOPASSWD:ALL/g" /etc/sudoers
VOLUME ["/sys/fs/cgroup", "/tmp", "/run"]
CMD ["/sbin/init"]
Looking for a geerlingguy-style solution.
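For what it's worth, one way to keep the suite running until there's a real fix is to guard the failing task with the molecule_test group var already defined in molecule.yml above; this skips the snap install under Molecule rather than fixing the udev issue:

- name: Ensure snap core is installed
  community.general.snap:
    name: core
    state: present
  when: not (molecule_test | default(false))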

Containerization of Node-Red failing: cannot find module 'express'

I am very new to Docker.
I am getting an error saying "cannot find module 'express'" while trying to containerize a simple Node-RED application. The details are as follows:
Base machine
OS: Debian 9 (stretch) 64-bit
RAM: 8 GB
GNOME: 3.22.2
Env: Oracle VirtualBox
Node-RED source
https://github.com/node-red/node-red.git
Docker version
17.12.0-ce, build c97c6d6
docker-compose version
1.20.1, build 5d8c71b
Dockerfile
FROM debian:stretch-slim
RUN useradd -c 'Node-Red user' -m -d /home/nodered -s /bin/bash nodered
RUN chown -R nodered.nodered /home/nodered
RUN echo "Acquire::http::Proxy \"http://xxxx:yyyy\";" > /etc/apt/apt.conf.d/01turnkey \
    && echo "Acquire::https::Proxy \"http://xxxx:yyyy\";" >> /etc/apt/apt.conf.d/01turnkey
ENV http_proxy="http://xxxx:yyyy" \
    https_proxy="http://xxxx:yyyy"
USER root
RUN apt-get update && apt-get -y install --no-install-recommends \
    ca-certificates \
    apt-utils \
    curl \
    sudo \
    git \
    python \
    make \
    g++ \
    gnupg2
RUN mkdir -p /home/nodered/shaan-node-red && chown -R nodered.nodered /home/nodered/shaan-node-red
ENV HOME /home/nodered/shaan-node-red
WORKDIR /home/nodered/shaan-node-red
RUN ls -la
RUN env
USER root
RUN echo "nodered ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/nodered && \
    chmod 0440 /etc/sudoers.d/nodered
RUN curl -sL https://deb.nodesource.com/setup_9.x | bash -
RUN apt-get -y install nodejs
RUN rm -rf node-v9.x
RUN node -v && npm -v   # v9.9.0 / 5.6.0
RUN npm config set proxy "http://xxxx:yyyy" && \
    npm config set http-proxy "http://xxxx:yyyy"
COPY . /home/nodered/shaan-node-red
RUN cd /home/nodered/shaan-node-red && ls -la && npm install
RUN npm run build && ls -la
RUN cd /home/nodered/shaan-node-red/node_modules/ \
    && git clone https://github.com/netsmarttech/node-red-contrib-s7.git \
    && ls -la | grep s7 \
    && cd ./node-red-contrib-s7 \
    && npm install
RUN ls -la /home/nodered/shaan-node-red/node_modules
ENTRYPOINT ["sh","entrypoint.sh"]
entrypoint.sh
node /home/nodered/shaan-node-red/red.js
Docker-compose.yml
version: '2.0'
services:
  web:
    image: shaan-node-red
    build: .
    volumes:
      - .:/home/nodered/shaan-node-red
    ports:
      - "1880:1880"
      - "5858:5858"
    network_mode: host
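Worth noting: the bind mount .:/home/nodered/shaan-node-red replaces the directory that npm install populated at build time, so a node_modules that exists only in the image disappears at run time. If that is what's happening here, a common pattern is to shield node_modules with an anonymous volume; a sketch of the same compose file with that one addition:

version: '2.0'
services:
  web:
    image: shaan-node-red
    build: .
    volumes:
      - .:/home/nodered/shaan-node-red
      # anonymous volume: keeps the image's node_modules visible under the bind mount
      - /home/nodered/shaan-node-red/node_modules
    ports:
      - "1880:1880"
      - "5858:5858"
    network_mode: host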
Building with command:
docker-compose up
Error description
(screenshot omitted; the container exits with "Error: Cannot find module 'express'")
Note
I am not getting any error when building the same Node-RED app on the base machine.

Cannot run 'varnishadm' inside docker container started with varnishd

I am running docker (via docker-compose) and can't run varnishadm from within the container. The error produced is:
Cannot open /var/lib/varnish/4f0dab1efca3/_.vsm: No such file or directory
Could not open shared memory
I have tried searching on the 'shared memory' issue and _.vsm with no luck. It seems that the _.vsm is not created at all and /var/lib/varnish/ inside the container is empty.
I have tried a variety of -T settings without any luck.
Why run varnishadm?
The root of why I need to run varnishadm is to reload Varnish while preserving the cache. My last-resort fallback is to set up Varnish as a service outside Docker. We are on an old version of Varnish for the time being.
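Since varnishd names its working directory after the hostname by default (hence the container ID in the _.vsm path above), one approach is to pin an instance name with -n on both sides and run varnishadm in the same container (docker-compose exec rather than run, since run creates a fresh container with a different hostname). A sketch, where the instance path and VCL label are placeholders of mine:

varnishd -F -n /var/lib/varnish/main -f /etc/varnish/varnish.vcl -s malloc,1G -a :80 -T 127.0.0.1:6082
# later, inside the same container:
varnishadm -n /var/lib/varnish/main vcl.load reload01 /etc/varnish/varnish.vcl
varnishadm -n /var/lib/varnish/main vcl.use reload01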
How am I starting docker?
CMD varnishd -F -f /etc/varnish/varnish.vcl \
    -s malloc,1G \
    -a :80
Full Dockerfile
FROM ubuntu:12.04
RUN apt-get update \
    && apt-get upgrade -y \
    && apt-get install wget dtrx varnish -y \
    && apt-get install pkg-config autoconf autoconf-archive automake libtool python-docutils libpcre3 libpcre3-dev xsltproc make -y \
    && rm -rf /var/lib/apt/lists/*
RUN export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/
RUN wget https://github.com/varnishcache/varnish-cache/archive/varnish-3.0.2.tar.gz --no-check-certificate \
    && dtrx -n varnish-3.0.2.tar.gz
WORKDIR /varnish-3.0.2/varnish-cache-varnish-3.0.2/
RUN cd /varnish-3.0.2/varnish-cache-varnish-3.0.2/ && ./autogen.sh \
    && cd /varnish-3.0.2/varnish-cache-varnish-3.0.2/ && ./configure && make install
RUN cd / && wget --no-check-certificate https://github.com/Dridi/libvmod-querystring/archive/v0.3.tar.gz && dtrx -n ./v0.3.tar.gz
WORKDIR /v0.3/libvmod-querystring-0.3
RUN ./autogen.sh && ./configure VARNISHSRC=/varnish-3.0.2/varnish-cache-varnish-3.0.2/ && make install
RUN cp /usr/local/lib/varnish/vmods/* /usr/lib/varnish/vmods/
WORKDIR /etc/varnish/
CMD varnishd -F -f /etc/varnish/varnish.vcl \
    -s malloc,1G \
    -a :80
EXPOSE 80
Full docker-compose
version: "3"
services:
varnish:
build: ./
ports:
- "8000:80"
volumes:
- ./default.vcl:/etc/varnish/varnish.vcl
- ./devicedetect.vcl:/etc/varnish/devicedetect.vcl
restart: unless-stopped

Cannot execute ansible playbook via docker container

I'm executing a pipeline on Jenkins that runs inside a Docker container. This pipeline calls another docker-compose file that executes an Ansible playbook. The service that executes the playbook is called agent, and is defined as follows:
agent:
  image: pjestrada/ansible
  links:
    - db
  environment:
    PROBE_HOST: "db"
    PROBE_PORT: "3306"
  command: ["probe.yml"]
This is the image it uses:
FROM ubuntu:trusty
MAINTAINER Pablo Estrada <pjestradac@gmail.com>
# Prevent dpkg errors
ENV TERM=xterm-256color
RUN sed -i "s/http:\/\/archive./http:\/\/nz.archive./g" /etc/apt/sources.list
# Install ansible
RUN apt-get update -qy && \
    apt-get install -qy software-properties-common && \
    apt-add-repository -y ppa:ansible/ansible && \
    apt-get update -qy && \
    apt-get install -qy ansible
# Copy baked-in playbooks
COPY ansible /ansible
# Add volume for Ansible playbooks
VOLUME /ansible
WORKDIR /ansible
RUN chmod +x /
# Entrypoint
ENTRYPOINT ["ansible-playbook"]
CMD ["site.yml"]
My local machine is Ubuntu 16.04, and when I run docker-compose up agent the playbook executes successfully. However, when I'm inside the Jenkins container I get this error on the same command:
Attaching to todobackend9dev_agent_1
agent_1  | ERROR! the playbook: site.yml does not appear to be a file
These are the image and compose files for my Jenkins container:
FROM jenkins:1.642.1
MAINTAINER Pablo Estrada <pjestradac@gmail.com>
# Suppress apt installation warnings
ENV DEBIAN_FRONTEND=noninteractive
# Change to root user
USER root
# Used to set the docker group ID
# Set to 497 by default, which is the group ID used by AWS Linux ECS instances
ARG DOCKER_GID=497
# Create docker group with GID
# Set default value of 497 if DOCKER_GID is set to a blank string by Docker Compose
RUN groupadd -g ${DOCKER_GID:-497} docker
# Used to control Docker and Docker Compose versions installed
# NOTE: As of February 2016, AWS Linux ECS only supports Docker 1.9.1
ARG DOCKER_ENGINE=1.10.2
ARG DOCKER_COMPOSE=1.6.2
# Install base packages
RUN apt-get update -y && \
    apt-get install apt-transport-https curl python-dev python-setuptools gcc make libssl-dev -y && \
    easy_install pip
# Install Docker Engine
RUN apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D && \
    echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | tee /etc/apt/sources.list.d/docker.list && \
    apt-get update -y && \
    apt-get purge lxc-docker* -y && \
    apt-get install docker-engine=${DOCKER_ENGINE:-1.10.2}-0~trusty -y && \
    usermod -aG docker jenkins && \
    usermod -aG users jenkins
# Install Docker Compose
RUN pip install docker-compose==${DOCKER_COMPOSE:-1.6.2} && \
    pip install ansible boto boto3
# Change to jenkins user
USER jenkins
# Add Jenkins plugins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
Compose File:
version: '2'
volumes:
  jenkins_home:
    external: true
services:
  jenkins:
    build:
      context: .
      args:
        DOCKER_GID: ${DOCKER_GID}
        DOCKER_ENGINE: ${DOCKER_ENGINE}
        DOCKER_COMPOSE: ${DOCKER_COMPOSE}
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
I added the volume in order to access the Docker socket from my Jenkins container. However, for some reason I'm not able to access the site.yml file the playbook needs, even though the file is available outside the container.
Can anyone help me solve this issue?
How sure are you about that volume mount point and your paths?
- jenkins_home:/var/jenkins_home
Have you tried debugging via echo? If it can't find the site.yml, then paths are the most likely cause. You can use Jenkins replay on a job to iterate quickly and modify parts of the Jenkins code. That will let you run things like:
sh "pwd; ls -la"
I recommend adding the equivalent within your Docker container so you can check the paths. My guess is that the workspace isn't where you think it is, and you'll want to run docker with:
-v ${env.WORKSPACE}:/jenkins-workspace
and then within the container:
pushd /jenkins-workspace
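Putting those two suggestions together, a scripted-pipeline sketch might look like the following (this assumes the playbooks live in the Jenkins workspace rather than relying on the copies baked into the image; /jenkins-workspace is the hypothetical mount target from above):

sh "pwd; ls -la"   // confirm where the workspace actually is
sh "docker run --rm -v ${env.WORKSPACE}:/jenkins-workspace -w /jenkins-workspace pjestrada/ansible probe.yml"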
