I'm trying to set up Ansible Molecule for testing roles on different OSes. For example, this role fails when it reaches the task that runs snap install core:
https://github.com/ProfessorManhattan/Ansible-Role-Snapd
molecule.yml:
---
dependency:
  name: galaxy
  options:
    role-file: requirements.yml
    requirements-file: requirements.yml
driver:
  name: docker
platforms:
  - name: Ubuntu-20.04
    image: professormanhattan/ansible-molecule-ubuntu2004
    command: /sbin/init
    tmpfs:
      - /run
      - /tmp
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
    pre_build_image: true
provisioner:
  name: ansible
  connection_options:
    ansible_connection: docker
    ansible_password: ansible
    ansible_ssh_user: ansible
  inventory:
    group_vars:
      all:
        molecule_test: true
  options:
    vvv: true
  playbooks:
    converge: converge.yml
verifier:
  name: ansible
scenario:
  create_sequence:
    - dependency
    - create
    - prepare
  check_sequence:
    - dependency
    - cleanup
    - destroy
    - create
    - prepare
    - converge
    - check
    - destroy
  converge_sequence:
    - dependency
    - create
    - prepare
    - converge
  destroy_sequence:
    - dependency
    - cleanup
    - destroy
  test_sequence:
    - lint
    - dependency
    - cleanup
    - destroy
    - syntax
    - create
    - prepare
    - converge
    - idempotence
    - side_effect
    - verify
    - cleanup
    - destroy
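For context, the failure shown further down appears when running this scenario with the standard Molecule CLI, e.g.:
molecule test
# or, while iterating on the role, just the converge step:
molecule converge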
install-Debian.yml:
---
- name: Ensure snapd is installed
  apt:
    name: snapd
    state: present
    update_cache: true

- name: Ensure fuse filesystem is installed
  apt:
    name: fuse
    state: present

- name: Ensure snap is started and enabled on boot
  ansible.builtin.systemd:
    enabled: true
    name: snapd
    state: started

- name: Ensure snap core is installed # This task is failing
  community.general.snap:
    name: core
    state: present
The error I receive is:
TASK [professormanhattan.snapd : Ensure fuse filesystem is installed] **********
ok: [Ubuntu-20.04]
TASK [professormanhattan.snapd : Ensure snap is started and enabled on boot] ***
changed: [Ubuntu-20.04]
TASK [professormanhattan.snapd : Ensure snap core is installed] ****************
fatal: [Ubuntu-20.04]: FAILED! => {"changed": false, "channel": "stable", "classic": false, "cmd": "sh -c \"/usr/bin/snap install core\"", "msg": "Ooops! Snap installation failed while executing 'sh -c \"/usr/bin/snap install core\"', please examine logs and error output for more details.", "rc": 1, "stderr": "error: cannot perform the following tasks:\n- Setup snap \"core\" (10823) security profiles (cannot reload udev rules: exit status 1\nudev output:\nFailed to send reload request: No such file or directory\n)\n", "stderr_lines": ["error: cannot perform the following tasks:", "- Setup snap \"core\" (10823) security profiles (cannot reload udev rules: exit status 1", "udev output:", "Failed to send reload request: No such file or directory", ")"], "stdout": "", "stdout_lines": []}
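For reference, the failing command from the error output can be reproduced by hand inside the running Molecule container (the container name comes from the platforms section above):
docker exec -it Ubuntu-20.04 sh -c "/usr/bin/snap install core"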
The same is true for all the other operating systems I'm trying to test. Here's the Dockerfile I'm using to build the Ubuntu image:
Dockerfile:
FROM ubuntu:20.04
LABEL maintainer="help@megabyte.space"
ENV container docker
ENV DEBIAN_FRONTEND noninteractive
# Source: https://github.com/ansible/molecule/issues/1104
RUN set -xe \
  && apt-get update \
  && apt-get install -y apt-utils \
  && apt-get upgrade -y \
  && apt-get install -y \
    build-essential \
    libyaml-dev \
    python3-apt \
    python3-dev \
    python3-pip \
    python3-setuptools \
    python3-yaml \
    software-properties-common \
    sudo \
    systemd \
    systemd-sysv \
  && apt-get clean \
  && pip3 install \
    ansible \
    ansible-lint \
    flake8 \
    molecule \
    yamllint \
  && mkdir -p /etc/ansible \
  && echo "[local]\nlocalhost ansible_connection=local" > /etc/ansible/hosts \
  && groupadd -r ansible \
  && useradd -m -g ansible ansible \
  && usermod -aG sudo ansible \
  && sed -i "/^%sudo/s/ALL\$/NOPASSWD:ALL/g" /etc/sudoers
VOLUME ["/sys/fs/cgroup", "/tmp", "/run"]
CMD ["/sbin/init"]
Looking for a geerlingguy-level Ansible expert here.
Related
I have Jenkins deployed on Kubernetes (AWS EKS), with a node designated for the Jenkins pipeline tasks.
I have a pipeline in which I want to build a Docker image; here is how my pipeline looks:
pipeline {
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yaml """
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    illumex.ai/noderole: jenkins-worker
  containers:
    - name: docker
      image: docker:latest
      imagePullPolicy: Always
      command:
        - cat
      tty: true
      volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
"""
        }
    }
    stages {
        stage('build') {
            steps {
                container('system') {
                    sh """
                    docker system prune -f
                    """
However, my job fails with:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I think it's a permissions issue. However, since the containers are created as part of the pipeline, which user should I grant the permission to?
On your Jenkins machine, ensure Docker is installed and the jenkins user is in the docker group; installing the Docker plugin alone is not enough.
The same goes for kubectl: it must be installed on the Jenkins machine.
FROM jenkins/jenkins
ARG HOST_UID=1004
ARG HOST_GID=999
USER root
RUN apt-get -y update && \
apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common && \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable" && \
apt-get update && \
apt-get -y install docker-ce docker-ce-cli containerd.io
RUN curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose \
&& chmod +x /usr/local/bin/docker-compose \
&& ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose \
&& curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list \
&& apt-get -y update \
&& apt install -y kubectl
RUN usermod -u $HOST_UID jenkins
RUN groupmod -g $HOST_GID docker
RUN usermod -aG docker jenkins
FROM base
RUN kubectl version
USER jenkins
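As a rough sketch, building and running that image could look like the following; the image name my-jenkins-docker is illustrative, and mounting the host's Docker socket is what the docker group membership above is for:
docker build -t my-jenkins-docker .
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-jenkins-docker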
I'm trying to run gcsfuse inside my Docker container to access my Google Cloud Storage so my program inside the container can use it. I'm using Google Kubernetes Engine. My problem is that when I run whereis modprobe I get no results, meaning modprobe is not installed. I've seen this post and this one, but they were futile. I've already run sudo apt install update && sudo apt install upgrade to upgrade my kernels, and also tried simply sudo apt-get install modprobe, which results in "package not found". I've edited my deployment.yaml file to include these (I'm deploying through GitHub Actions):
spec:
  ...
  securityContext:
    privileged: true
    capabilities:
      add:
        - SYS_ADMIN
        - SYS_MODULE
  env:
    - name: STARTUP_SCRIPT
      value: |
        #! /bin/bash
        modprobe fuse
But these didn't change anything at all. I've seen in a post that I must add something like /lib/modules, but I already have a lib directory inside my container that my program uses; is there a workaround for that? Am I installing gcsfuse wrong? (Installing gcsfuse was hard; the normal practices didn't work, but in the end we made it work.)
Here is my gcsfuse installation:
RUN apt-get update -y && apt-get dist-upgrade -y && apt-get -y install lsb-release curl gnupg && apt -y install lsb-core
ENV GCSFUSE_REPO gcsfuse-stretch
RUN apt-get update -y && apt-get install -y --no-install-recommends apt-transport-https ca-certificates curl gnupg
RUN echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | tee /etc/apt/sources.list.d/gcsfuse.list
RUN echo "deb https://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# Install gcsfuse and google cloud sdk
RUN apt-get update -y && apt-get install -y gcsfuse google-cloud-sdk \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
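For reference, once the package is installed, mounting a bucket inside the container is a single command (bucket name and mount point below are placeholders); this still depends on the FUSE device and the privileges the securityContext above is meant to provide:
mkdir -p /mnt/gcs
gcsfuse my-bucket /mnt/gcs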
This is a continuation of another error I had whilst trying to run gcsfuse; I realised I don't have modprobe and asked this question. The image is Ubuntu 22.10.
Edit: This question says I must add stuff to a YAML file, but I'm not sure which YAML file to add it to. Since I'm using GitHub Actions, I have deployment.yaml, kustomization.yaml and service.yaml.
When you provide the following as your Job YAML file, it grants the correct privileges to the Job you are creating, rather than putting them in the deployment.yaml file:
apiVersion: batch/v1
kind: Job
metadata:
  # Unique key of the Job instance
  name: job-40
spec:
  template:
    metadata:
      name: job-40
    spec:
      containers:
        - name: nginx-1
          image: gcr.io/videoo2/github.com/...
          command: ["/bin/sleep"]
          args: ["1000"]
          securityContext:
            privileged: true
            capabilities:
              add:
                - SYS_ADMIN
      # Do not restart containers after they exit
      restartPolicy: Never
  # of retries before marking as failed.
  backoffLimit: 0
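Applying and inspecting that Job would then look something like this (the file name is illustrative):
kubectl apply -f job-40.yaml
kubectl logs job/job-40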
I've tried to install a Docker-based OnlyOffice Document Server via a docker-compose.yml and a Dockerfile. I got the standard installation files via
git clone https://github.com/ONLYOFFICE/Docker-DocumentServer
System environment:
OS: openSUSE Leap 15.3
Docker version 20.10.14-ce, build 87a90dc786bd
docker-compose version 1.27.4, build 40524192
When building onlyoffice-documentserver, I get the following error message at Dockerfile step 5/15:
ERROR: Service 'onlyoffice-documentserver' failed to build : failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
My docker compose file:
version: '2'
services:
  onlyoffice-documentserver:
    build:
      context: .
    container_name: onlyoffice-documentserver
    depends_on:
      - onlyoffice-postgresql
      - onlyoffice-rabbitmq
    environment:
      - DB_TYPE=postgres
      - DB_HOST=onlyoffice-postgresql
      - DB_PORT=5432
      - DB_NAME=onlyoffice
      - DB_USER=onlyoffice
      - AMQP_URI=amqp://guest:guest@onlyoffice-rabbitmq
      # Uncomment strings below to enable the JSON Web Token validation.
      #- JWT_ENABLED=true
      #- JWT_SECRET=secret
      #- JWT_HEADER=Authorization
      #- JWT_IN_BODY=true
    ports:
      - '2085:80'
    stdin_open: true
    restart: always
    stop_grace_period: 60s
    volumes:
      - /var/www/onlyoffice/Data
      - /var/log/onlyoffice
      - /var/lib/onlyoffice/documentserver/App_Data/cache/files
      - /var/www/onlyoffice/documentserver-example/public/files
      - /usr/share/fonts
  onlyoffice-rabbitmq:
    container_name: onlyoffice-rabbitmq
    image: rabbitmq
    restart: always
    expose:
      - '5672'
  onlyoffice-postgresql:
    container_name: onlyoffice-postgresql
    image: postgres:9.5
    environment:
      - POSTGRES_DB=onlyoffice
      - POSTGRES_USER=onlyoffice
      - POSTGRES_HOST_AUTH_METHOD=trust
    restart: always
    expose:
      - '5432'
    volumes:
      - postgresql_data:/var/lib/postgresql
volumes:
  postgresql_data:
My Dockerfile content:
FROM ubuntu:20.04
LABEL maintainer Ascensio System SIA <support@onlyoffice.com>
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8 DEBIAN_FRONTEND=noninteractive PG_VERSION=12
ARG ONLYOFFICE_VALUE=onlyoffice
RUN echo "#!/bin/sh\nexit 0" > /usr/sbin/policy-rc.d && \
apt-get -y update && \
apt-get -yq install wget apt-transport-https gnupg locales && \
mkdir -p $HOME/.gnupg && \
gpg --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/onlyoffice.gpg --keyserver keyserver.ubuntu.com --recv-keys 0x8320ca65cb2de8e5 && \
chmod 644 /etc/apt/trusted.gpg.d/onlyoffice.gpg && \
locale-gen en_US.UTF-8 && \
echo ttf-mscorefonts-installer msttcorefonts/accepted-mscorefonts-eula select true | debconf-set-selections && \
apt-get -yq install \
adduser \
apt-utils \
bomstrip \
certbot \
curl \
gconf-service \
htop \
libasound2 \
libboost-regex-dev \
libcairo2 \
libcurl3-gnutls \
libcurl4 \
libgtk-3-0 \
libnspr4 \
libnss3 \
libstdc++6 \
libxml2 \
libxss1 \
libxtst6 \
mysql-client \
nano \
net-tools \
netcat-openbsd \
nginx-extras \
postgresql \
postgresql-client \
pwgen \
rabbitmq-server \
redis-server \
software-properties-common \
sudo \
supervisor \
ttf-mscorefonts-installer \
xvfb \
zlib1g && \
if [ $(ls -l /usr/share/fonts/truetype/msttcorefonts | wc -l) -ne 61 ]; \
then echo 'msttcorefonts failed to download'; exit 1; fi && \
echo "SERVER_ADDITIONAL_ERL_ARGS=\"+S 1:1\"" | tee -a /etc/rabbitmq/rabbitmq-env.conf && \
sed -i "s/bind .*/bind 127.0.0.1/g" /etc/redis/redis.conf && \
sed 's|\(application\/zip.*\)|\1\n application\/wasm wasm;|' -i /etc/nginx/mime.types && \
pg_conftool $PG_VERSION main set listen_addresses 'localhost' && \
service postgresql restart && \
sudo -u postgres psql -c "CREATE DATABASE $ONLYOFFICE_VALUE;" && \
sudo -u postgres psql -c "CREATE USER $ONLYOFFICE_VALUE WITH password '$ONLYOFFICE_VALUE';" && \
sudo -u postgres psql -c "GRANT ALL privileges ON DATABASE $ONLYOFFICE_VALUE TO $ONLYOFFICE_VALUE;" && \
service postgresql stop && \
service redis-server stop && \
service rabbitmq-server stop && \
service supervisor stop && \
service nginx stop && \
rm -rf /var/lib/apt/lists/*
COPY config /app/ds/setup/config/
COPY run-document-server.sh /app/ds/run-document-server.sh
EXPOSE 80 443
ARG COMPANY_NAME=onlyoffice
ARG PRODUCT_NAME=documentserver
ARG PACKAGE_URL="http://download.onlyoffice.com/install/documentserver/linux/${COMPANY_NAME}-${PRODUCT_NAME}_amd64.deb"
ENV COMPANY_NAME=$COMPANY_NAME \
PRODUCT_NAME=$PRODUCT_NAME
RUN wget -q -P /tmp "$PACKAGE_URL" && \
apt-get -y update && \
service postgresql start && \
apt-get -yq install /tmp/$(basename "$PACKAGE_URL") && \
service postgresql stop && \
service supervisor stop && \
chmod 755 /app/ds/*.sh && \
rm -f /tmp/$(basename "$PACKAGE_URL") && \
rm -rf /var/log/$COMPANY_NAME && \
rm -rf /var/lib/apt/lists/*
VOLUME /var/log/$COMPANY_NAME /var/lib/$COMPANY_NAME /var/www/$COMPANY_NAME/Data /var/lib/postgresql /var/lib/rabbitmq /var/lib/redis /usr/share/fonts/truetype/custom
ENTRYPOINT ["/app/ds/run-document-server.sh"]
The whole installation log:
docker-compose up -d
Creating network "docker-documentserver_default" with the default driver
Creating volume "docker-documentserver_postgresql_data" with default driver
Pulling onlyoffice-rabbitmq (rabbitmq:)...
latest: Pulling from library/rabbitmq
d5fd17ec1767: Already exists
921d0bdeed9f: Pull complete
ffce2faba222: Pull complete
9b507bebfd9c: Pull complete
789518776d97: Pull complete
fdc5e6a90731: Pull complete
f703023f15bd: Pull complete
858b7223a344: Pull complete
df8ec9fdae09: Pull complete
Digest: sha256:c14cd855625a3fab10e24abcd0511d1c62c411c66f16b9beb92b3477f3ebcb95
Status: Downloaded newer image for rabbitmq:latest
Pulling onlyoffice-postgresql (postgres:9.5)...
9.5: Pulling from library/postgres
fa1690ae9228: Pull complete
a73f6e07b158: Pull complete
973a0c44ddba: Pull complete
07e5342b01d4: Pull complete
578aad0862c9: Pull complete
a0b157088f7a: Pull complete
6c9046f06fc5: Pull complete
ae19407bdc48: Pull complete
e53b7c20aa96: Pull complete
a135edcc0831: Pull complete
fed07b1b1b94: Pull complete
18d9026fcfbd: Pull complete
4d2d5fae97d9: Pull complete
d419466e642d: Pull complete
Digest: sha256:75ebf479151a8fd77bf2fed46ef76ce8d518c23264734c48f2d1de42b4eb40ae
Status: Downloaded newer image for postgres:9.5
Building onlyoffice-documentserver
Step 1/15 : FROM ubuntu:20.04
20.04: Pulling from library/ubuntu
d5fd17ec1767: Already exists
Digest: sha256:47f14534bda344d9fe6ffd6effb95eefe579f4be0d508b7445cf77f61a0e5724
Status: Downloaded newer image for ubuntu:20.04
---> 53df61775e88
Step 2/15 : LABEL maintainer Ascensio System SIA <support@onlyoffice.com>
---> Running in 23bc8d147d67
Removing intermediate container 23bc8d147d67
---> d256ca65fcbd
Step 3/15 : ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8 DEBIAN_FRONTEND=noninteractive PG_VERSION=12
---> Running in de846cb9b6ab
Removing intermediate container de846cb9b6ab
---> 57fd532b8b95
Step 4/15 : ARG ONLYOFFICE_VALUE=onlyoffice
---> Running in d920bae77f3b
Removing intermediate container d920bae77f3b
---> 4650e5fce102
Step 5/15 : RUN echo "#!/bin/sh\nexit 0" > /usr/sbin/policy-rc.d && apt-get -y update && apt-get -yq install wget apt-transport-https gnupg locales && mkdir -p $HOME/.gnupg && gpg --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/onlyoffice.gpg --keyserver keyserver.ubuntu.com --recv-keys 0x8320ca65cb2de8e5 && chmod 644 /etc/apt/trusted.gpg.d/onlyoffice.gpg && locale-gen en_US.UTF-8 && echo ttf-mscorefonts-installer msttcorefonts/accepted-mscorefonts-eula select true | debconf-set-selections && apt-get -yq install adduser apt-utils bomstrip certbot curl gconf-service htop libasound2 libboost-regex-dev libcairo2 libcurl3-gnutls libcurl4 libgtk-3-0 libnspr4 libnss3 libstdc++6 libxml2 libxss1 libxtst6 mysql-client nano net-tools netcat-openbsd nginx-extras postgresql postgresql-client pwgen rabbitmq-server redis-server software-properties-common sudo supervisor ttf-mscorefonts-installer xvfb zlib1g && if [ $(ls -l /usr/share/fonts/truetype/msttcorefonts | wc -l) -ne 61 ]; then echo 'msttcorefonts failed to download'; exit 1; fi && echo "SERVER_ADDITIONAL_ERL_ARGS=\"+S 1:1\"" | tee -a /etc/rabbitmq/rabbitmq-env.conf && sed -i "s/bind .*/bind 127.0.0.1/g" /etc/redis/redis.conf && sed 's|\(application\/zip.*\)|\1\n application\/wasm wasm;|' -i /etc/nginx/mime.types && pg_conftool $PG_VERSION main set listen_addresses 'localhost' && service postgresql restart && sudo -u postgres psql -c "CREATE DATABASE $ONLYOFFICE_VALUE;" && sudo -u postgres psql -c "CREATE USER $ONLYOFFICE_VALUE WITH password '$ONLYOFFICE_VALUE';" && sudo -u postgres psql -c "GRANT ALL privileges ON DATABASE $ONLYOFFICE_VALUE TO $ONLYOFFICE_VALUE;" && service postgresql stop && service redis-server stop && service rabbitmq-server stop && service supervisor stop && service nginx stop && rm -rf /var/lib/apt/lists/*
---> Running in 521985bcc74a
ERROR: Service 'onlyoffice-documentserver' failed to build : failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
Thanks in advance.
If you are running on a Windows machine and encountering issues running a shell script, you can try the following steps to resolve the problem:
Convert the file to UNIX line endings using the dos2unix command, by running the following in a terminal: dos2unix your-file.sh
If you do not have access to a Linux system, you can use Git Bash for Windows, which ships with a dos2unix.exe command.
Run the docker compose command in the Git Bash terminal. This ensures the script is executed in the correct environment.
See the tutorial "How to fix 'exec user process caused no such file or directory' in Docker".
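A minimal sketch of that workflow for the Document Server build above, with the script name taken from the COPY line in the Dockerfile and the service name from docker-compose.yml:
dos2unix run-document-server.sh
docker-compose build --no-cache onlyoffice-documentserver
docker-compose up -d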
I have a container I'm deploying to Kubernetes (GKE). The image I built locally is good and runs as expected, but when the run command is changed to pwd && ls, the image being pulled from Google Container Registry returns the output shown here:
I 2020-06-17T16:24:54.222382706Z /app
I 2020-06-17T16:24:54.226108583Z lost+found
I 2020-06-17T16:24:54.226143620Z package-lock.json
and the output of the same commands when running in the container locally, with docker run -it <container:tag> bash is this:
#${API_CONTAINER} resolves to gcr.io/<project>/container: i.e. the tag gets appended
.../# docker run -it ${API_CONTAINER}latest bash
root@362737147de4:/app# pwd
/app
root@362737147de4:/app# ls
Dockerfile dist files node_modules package.json ssh.bat stop_forever.bat test tsconfig.json
cloudbuild.yaml environments log package-lock.json src startApi.sh swagger.json test.pdf tsconfig.test.json
root@362737147de4:/app#
My thoughts on this start with: either the push to the registry is failing, or I'm not pulling the right image, i.e. pulling some old latest tag that was built by Cloud Build in a previous attempt to get this going.
What could be the potential issue? what could potentially fix this issue?
Edit: After using differing tags in deployment, using --no-cache during build, and pulling from the registry on another machine, my inclination is that GKE is having an issue pulling the image from GCR. Is there a way I can put this somewhere else, or get visibility on what's going on with the pull?
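(For anyone checking the same thing: what the registry actually serves can be verified by pulling the tag directly on any machine with access, along the lines of the following; the tag is copied from the manifest below.)
docker pull gcr.io/<projectid>/api:latest-0.0.9
docker run --rm -it gcr.io/<projectid>/api:latest-0.0.9 bash -c "pwd && ls"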
EDIT 2:
So yes, I have a Dockerfile I can share, but please be aware that I inherited it and don't understand the process that went into building it, or why some steps were necessary to the other developer. (I am definitely interested in refactoring this as much as possible.)
FROM node:8.12.0
RUN mkdir /app
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
RUN apt-get update && apt-get install snmp -y
RUN npm install --unsafe-perm=true
RUN apt-get update \
&& apt-get install -y \
gconf-service \
libasound2 \
libatk1.0-0 \
libatk-bridge2.0-0 \
libc6 \
libcairo2 \
libcups2 \
libdbus-1-3 \
libexpat1 \
libfontconfig1 \
libgcc1 \
libgconf-2-4 \
libgdk-pixbuf2.0-0 \
libglib2.0-0 \
libgtk-3-0 \
libnspr4 \
libpango-1.0-0 \
libpangocairo-1.0-0 \
libstdc++6 \
libx11-6 \
libx11-xcb1 \
libxcb1 \
libxcomposite1 \
libxcursor1 \
libxdamage1 \
libxext6 \
libxfixes3 \
libxi6 \
libxrandr2 \
libxrender1 \
libxss1 \
libxtst6 \
ca-certificates \
fonts-liberation \
libappindicator1 \
libnss3 \
lsb-release \
xdg-utils \
wget
COPY . /app
# Installing puppeteer and chromium for generating PDF of the invoices.
# Install latest chrome dev package and fonts to support major charsets (Chinese, Japanese, Arabic, Hebrew, Thai and a few others)
# Note: this installs the necessary libs to make the bundled version of Chromium that Puppeteer
# installs, work.
RUN apt-get update \
&& apt-get install -y wget gnupg libpam-cracklib \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
# Uncomment to skip the chromium download when installing puppeteer. If you do,
# you'll need to launch puppeteer with:
# browser.launch({executablePath: 'google-chrome-unstable'})
# ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD true
# Install puppeteer so it's available in the container.
RUN npm i puppeteer \
# Add user so we don't need --no-sandbox.
# same layer as npm install to keep re-chowned files from using up several hundred MBs more space
&& groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& chown -R pptruser:pptruser /home/pptruser \
&& chown -R pptruser:pptruser /app/node_modules
#build the api, and move into place.... framework options are limited with the build.
RUN npm i puppeteer kiwi-server-cli && kc build -e prod
RUN rm -Rf ./environments & rm -Rf ./src && cp -R ./dist/prod/* .
# Run everything after as non-privileged user.
# USER pptruser
CMD ["google-chrome-unstable"] # I have tried adding this here as well "&&", "node", "src/server.js"
For pushing the image I'm using this command:
docker push gcr.io/<projectid>/api:latest-<version>
I have the credentials set up with gcloud auth configure-docker, and here's a sanitized version of the YAML manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f ./docker-compose.yml
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: api
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f ./docker-compose.yml
        kompose.version: 1.21.0 ()
      creationTimestamp: null
      labels:
        io.kompose.service: api
    spec:
      containers:
        - args:
            - bash
            - -c
            - node src/server.js
          env:
            - name: NODE_ENV
              value: production
            - name: TZ
              value: America/New_York
          image: gcr.io/<projectId>/api:latest-0.0.9
          imagePullPolicy: Always
          name: api
          ports:
            - containerPort: 8087
          resources: {}
          volumeMounts:
            - mountPath: /app
              name: api-claim0
            - mountPath: /files
              name: api-claim1
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - name: api-claim0
          persistentVolumeClaim:
            claimName: api-claim0
        - name: api-claim1
          persistentVolumeClaim:
            claimName: api-claim1
status: {}
The solution comes from the original intent of the docker-compose.yml file which was converted into a kubernetes manifest via a tool called kompose. The original docker-compose file was intended for development and as such had overrides in place to push the local development environment into the running container.
This was because of this in the yml file:
services:
  api:
    build: ./api
    volumes:
      - ./api:/app
      - ./api/files:/files
which translates to this in the Kubernetes manifest:
volumeMounts:
  - mountPath: /app
    name: api-claim0
  - mountPath: /files
    name: api-claim1
volumes:
  - name: api-claim0
    persistentVolumeClaim:
      claimName: api-claim0
  - name: api-claim1
    persistentVolumeClaim:
      claimName: api-claim1
Kubernetes has no files to supply for these claims, so the app directory is essentially overwritten with an empty volume and the file is not found.
Removing these directives from the Kubernetes manifest resolved the issue.
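A minimal sketch of the corrected container spec, i.e. the manifest above with the api-claim0/api-claim1 volumeMounts and volumes dropped so the /app baked into the image is used:
    spec:
      containers:
        - args:
            - bash
            - -c
            - node src/server.js
          image: gcr.io/<projectId>/api:latest-0.0.9
          imagePullPolicy: Always
          name: api
          ports:
            - containerPort: 8087
      restartPolicy: Always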
Reminder to us all to be mindful.
Managing images [1] includes listing images in a repository, adding tags, deleting tags, copying images to a new repository, and deleting images. I hope the troubleshooting documentation [2] is helpful for diagnosing the issue.
[1] https://cloud.google.com/container-registry/docs/managing
[2] https://cloud.google.com/container-registry/docs/troubleshooting
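For instance, listing what is actually in the registry (project and image names copied from the question) can be done with:
gcloud container images list --repository=gcr.io/<projectid>
gcloud container images list-tags gcr.io/<projectid>/api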
I am trying to Dockerize an Ansible playbook that runs a MySQL dump.
---
- name: MySQL Dump
  hosts: lh
  connection: local
  vars_files:
    - ./vars/sql-credentials.yml
    - ./vars/aws-credentials.yml
  tasks:
    - name: Create Folder To Store
      file:
        state: directory
        path: ./backups
    - name: Get Dump from MySQL DB
      mysql_db:
        name: "{{ db_name }}"
        config_file: .my.cnf
        login_host: "{{ db_login_host }}"
        login_user: "{{ db_login_user }}"
        login_password: "{{ db_login_password }}"
        state: dump
        target: "backups/z-{{ ansible_date_time.date }}.sql.gz"
Executing the playbook outside of the container, I get the SQL dump file that I am expecting (the dump is about 80 MB). When the playbook runs inside the container, it finishes early (almost immediately after the task is listed) without any errors, and the output file is only 20 KB.
FROM ubuntu:bionic
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install -y software-properties-common systemd && \
rm -rf /var/lib/apt/lists/*
RUN apt-add-repository -y ppa:ansible/ansible && apt-get update && apt-get install -y \
git \
ansible \
mysql-client \
python-pip \
python-dev \
build-essential \
&& rm -rf /var/lib/apt/lists/*
COPY . .
RUN pip install -r requirements.txt
RUN echo "[local]\nlocalhost ansible_connection=local" > /etc/ansible/hosts
CMD [ "ansible-playbook", "-i", "inventory", "playbook.yml", "-vvvv" ]
I've tried running it in verbose mode inside the container, and I'm not getting any errors at all.
The reason I want this to run in a container is that I want Travis CI to execute this dump on a cron schedule and then deploy it to an S3 bucket.
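For what it's worth, a quick way to see what actually happens inside the container is to build it and run the same command as the CMD interactively (the image tag here is illustrative):
docker build -t mysql-dump-playbook .
docker run --rm -it mysql-dump-playbook ansible-playbook -i inventory playbook.yml -vvvv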