Run ansible-playbook in Jenkins container - docker

Question: Is it possible to have a docker-compose file to run ansible-playbook in a Jenkins container?
Summary:
I have a Jenkins container (containerA) from which I would like to run ansible-playbook. However, since a container running ansible-playbook stops as soon as the playbook finishes, I don't know how to define such a non-running container in docker-compose.
I have posted the output of docker ps -a, the docker-compose.yml, and the Dockerfile for the ansible-playbook image.
Please let me know if my question is unclear.
PG
root@jenkins1:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c47a4ee06d71 jenkins/jenkins "/sbin/tini -- /usr/…" 2 months ago Up 2 months 0.0.0.0:50000->50000/tcp, 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp jenkins1
956309ae7370 foo/ansible "ansible-playbook" 2 months ago Exited (2) 2 months ago hopeful_hypatia
cat /opt/docker_jenkins/docker-compose.yml
version: '3.2'
services:
  jenkins:
    restart: always
    image: 'jenkins/jenkins'
    container_name: jenkins1
    ports:
      - '80:8080'
      - '443:8443'
      - '50000:50000'
    volumes:
      - type: volume
        source: jenkins_home
        target: /var/jenkins_home
        volume:
          nocopy: true
      - type: bind
        source: /var/lib/bin
        target: /root/.local/bin
volumes:
  jenkins_home:
Dockerfile for the ansible-playbook image:
FROM alpine:3.7
ENV ANSIBLE_VERSION 2.8.6
ENV BUILD_PACKAGES \
bash \
curl \
tar \
openssh-client \
sshpass \
git \
python \
py-boto \
py-dateutil \
py-httplib2 \
py-jinja2 \
py-paramiko \
py-pip \
py-yaml \
ca-certificates
# If installing ansible@testing
#RUN \
#    echo "@testing http://nl.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN set -x && \
\
echo "==> Adding build-dependencies..." && \
apk --update add --virtual build-dependencies \
gcc \
musl-dev \
libffi-dev \
openssl-dev \
python-dev && \
\
echo "==> Upgrading apk and system..." && \
apk update && apk upgrade && \
\
echo "==> Adding Python runtime..." && \
apk add --no-cache ${BUILD_PACKAGES} && \
pip install --upgrade pip && \
pip install python-keyczar docker-py && \
\
echo "==> Installing Ansible..." && \
pip install ansible==${ANSIBLE_VERSION} && \
\
echo "==> Cleaning up..." && \
apk del build-dependencies && \
rm -rf /var/cache/apk/* && \
\
echo "==> Adding hosts for convenience..." && \
mkdir -p /etc/ansible /ansible && \
echo "[local]" >> /etc/ansible/hosts && \
echo "localhost" >> /etc/ansible/hosts
ENV ANSIBLE_GATHERING smart
ENV ANSIBLE_HOST_KEY_CHECKING false
ENV ANSIBLE_RETRY_FILES_ENABLED false
ENV ANSIBLE_ROLES_PATH /ansible/playbooks/roles
ENV ANSIBLE_SSH_PIPELINING True
ENV PYTHONPATH /ansible/lib
ENV PATH /ansible/bin:$PATH
ENV ANSIBLE_LIBRARY /ansible/library
WORKDIR /ansible/playbooks
ENTRYPOINT ["ansible-playbook"]

A Docker container stays up only while some process or service inside it keeps running.
In your Dockerfile you set ansible-playbook as the entrypoint; invoked without arguments it errors out with ERROR! You must specify a playbook file to run (along with the help output), and the container exits immediately.
If you want to execute an Ansible playbook you have to pass more arguments.
A successful playbook execution looks like:
ansible-playbook -i <inventory_file> <playbook_name>
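If you still want the one-shot ansible container declared next to Jenkins, one option (a sketch; the service name, playbook path, and inventory file are placeholders) is to keep it in the same docker-compose.yml and start it on demand with docker-compose run instead of docker-compose up:

```yaml
services:
  ansible:
    image: foo/ansible          # ENTRYPOINT is already ansible-playbook
    volumes:
      - ./playbooks:/ansible/playbooks   # mount playbooks into the image's WORKDIR
```

Then run the playbook on demand with something like: docker-compose run --rm ansible -i hosts site.yml. Note that a plain docker-compose up would still try to start this service, so bring up only Jenkins with: docker-compose up -d jenkins.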

Related

Laravel Sail with Github actions: How to run artisan and composer commands on container start

I'm using Laravel Sail and deploy the app on a VPS using GitHub Actions, and I need to know where I should run artisan and composer commands on container start. Is it here in the GitHub Actions file (the last 3 commands): docker exec -it laravel9-dashboard-laravel-1 bash (to gain access to the container), php artisan optimize:clear, composer dump-autoload?
# Article: https://medium.com/rockedscience/docker-ci-cd-pipeline-with-github-actions-6d4cd1731030
name: Build and push Docker image upon release
on:
  # Build and push Docker images *only* for releases.
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  run-linters:
    name: Run linters
    runs-on: ubuntu-latest
    steps:
      - name: Check out Git repository
        uses: actions/checkout@v2
      - name: Set up PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: "8.1"
          coverage: none
          tools: phpcs
      - name: Run linters
        uses: wearerequired/lint-action@v2
        with:
          php_codesniffer: true
          # Optional: Ignore warnings
          php_codesniffer_args: "-n"
  ssh:
    name: Build SSH
    runs-on: ubuntu-latest
    steps:
      - name: executing remote ssh commands using password
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.VULTR_HOST }}
          username: ${{ secrets.VULTR_USERNAME }}
          password: ${{ secrets.VULTR_PASSWORD }}
          port: ${{ secrets.VULTR_PORT }}
          script: |
            cd laravel9-dashboard
            docker-compose down
            docker container prune -f
            git pull origin main
            docker build -t laravel9-dashboard . --no-cache
            docker-compose up -d --remove-orphans
            docker exec -it laravel9-dashboard-laravel-1 bash
            php artisan optimize:clear
            composer dump-autoload
Or should I add these commands to the Dockerfile?
FROM ubuntu:22.04
LABEL maintainer="Taylor Otwell"
ARG NODE_VERSION=16
ARG POSTGRES_VERSION=14
WORKDIR /var/www/html
ENV DEBIAN_FRONTEND noninteractive
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update \
&& apt-get install -y cron \
&& apt-get install -y gnupg gosu curl ca-certificates zip unzip git supervisor sqlite3 libcap2-bin libpng-dev python2 \
&& mkdir -p ~/.gnupg \
&& chmod 600 ~/.gnupg \
&& echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf \
&& echo "keyserver hkp://keyserver.ubuntu.com:80" >> ~/.gnupg/dirmngr.conf \
&& gpg --recv-key 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c \
&& gpg --export 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c > /usr/share/keyrings/ppa_ondrej_php.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/ppa_ondrej_php.gpg] https://ppa.launchpadcontent.net/ondrej/php/ubuntu jammy main" > /etc/apt/sources.list.d/ppa_ondrej_php.list \
&& apt-get update \
&& apt-get install -y php8.1-cli php8.1-dev \
php8.1-pgsql php8.1-sqlite3 php8.1-gd \
php8.1-curl \
php8.1-imap php8.1-mysql php8.1-mbstring \
php8.1-xml php8.1-zip php8.1-bcmath php8.1-soap \
php8.1-intl php8.1-readline \
php8.1-ldap \
php8.1-msgpack php8.1-igbinary php8.1-redis php8.1-swoole \
php8.1-memcached php8.1-pcov php8.1-xdebug \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& curl -sLS https://deb.nodesource.com/setup_$NODE_VERSION.x | bash - \
&& apt-get install -y nodejs \
&& npm install -g npm \
&& curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | tee /usr/share/keyrings/yarn.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/yarn.gpg] https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
&& curl -sS https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor | tee /usr/share/keyrings/pgdg.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt jammy-pgdg main" > /etc/apt/sources.list.d/pgdg.list \
&& apt-get update \
&& apt-get install -y yarn \
&& apt-get install -y mysql-client \
&& apt-get install -y postgresql-client-$POSTGRES_VERSION \
&& apt-get -y autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN setcap "cap_net_bind_service=+ep" /usr/bin/php8.1
RUN groupadd --force -g 1000 sail
RUN useradd -ms /bin/bash --no-user-group -g 1000 -u 1337 sail
COPY scheduler /etc/cron.d/scheduler
RUN chmod 0644 /etc/cron.d/scheduler \
&& crontab /etc/cron.d/scheduler
COPY start-container /usr/local/bin/start-container
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY php.ini /etc/php/8.1/cli/conf.d/99-sail.ini
RUN chmod +x /usr/local/bin/start-container
EXPOSE 8000
ENTRYPOINT ["start-container"]
I'm not a Docker expert; I'm just trying to learn what I need to deploy my app using Docker and GitHub Actions.
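A side note on the deploy script in the workflow above: docker exec -it container bash opens an interactive shell, so in a non-interactive CI job that line hangs or fails, and the two commands after it would run on the VPS host rather than inside the container. A sketch of running them non-interactively inside the container (the container name is taken from the question's script):

```yaml
script: |
  docker-compose up -d --remove-orphans
  docker exec laravel9-dashboard-laravel-1 php artisan optimize:clear
  docker exec laravel9-dashboard-laravel-1 composer dump-autoload
```
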

Super slow to publish application on Docker

I've researched everything and I can't find a solution.
Running the docker-compose build and bringing everything up in production takes 10 to 15 minutes, which is crazy. I've tried everything: erasing the volume, cleaning the entire system, changing the network connection, uninstalling/reinstalling Docker, running a complete docker system prune -a - nothing.
I don't know if it is because of this delay, but it seems that even the DNS gets lost and cannot find the application after all this time. I have to press Ctrl+F5 to load the new update.
But this time is driving me crazy: 15 minutes to get everything up.
I appreciate any help.
docker-compose.yml:
version: '3'
volumes:
  app-volume:
services:
  nginx:
    container_name: nginx
    restart: always
    build:
      context: ./nginx/
      dockerfile: ./Dockerfile
    depends_on:
      - app
    volumes:
      - app-volume:/app
    env_file:
      - ./nginx/.envs/nginx.env
    ports:
      - "80:80"
      - "443:443"
  app:
    container_name: app
    build:
      context: ./app/
      dockerfile: ./compose/production/Dockerfile
    image: app
    volumes:
      # - ./app/:/app
      - app-volume:/home/app
    env_file:
      - ./app/.envs/.production/app.env
app Dockerfile:
FROM node:12-alpine as builder
WORKDIR /home/app
RUN apk update && \
apk add --no-cache python make g++ iputils
RUN npm install -g @angular/cli
COPY package*.json ./
COPY *.lock ./
RUN yarn install --unsafe-perm
COPY . .
ARG FORCE_BUILD_ENV=1
ARG ENV_BUILD=0
RUN export $(grep -v '^#' .envs/.production/app.env | xargs) && \
npm run build:env && \
ls -lah && \
cat ./src/environments/environment.ts && \
cat ./src/environments/environment.prod.ts
CMD ["ng", "build", "--prod", "--outputPath=dist"]
nginx/Dockerfile:
FROM alpine:3.10
LABEL maintainer="NGINX Docker Maintainers <docker-maint@nginx.com>"
ENV NGINX_VERSION 1.16.1
ENV NGX_BROTLI_COMMIT e505dce68acc190cc5a1e780a3b0275e39f160ca
RUN GPG_KEYS=B0F4253373F8F6F510D42178520A9993A1C052F8 \
&& CONFIG="\
--prefix=/etc/nginx \
--sbin-path=/usr/sbin/nginx \
--modules-path=/usr/lib/nginx/modules \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--pid-path=/var/run/nginx.pid \
--lock-path=/var/run/nginx.lock \
--http-client-body-temp-path=/var/cache/nginx/client_temp \
--http-proxy-temp-path=/var/cache/nginx/proxy_temp \
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
--http-scgi-temp-path=/var/cache/nginx/scgi_temp \
--user=nginx \
--group=nginx \
--with-http_ssl_module \
--with-http_realip_module \
--with-http_addition_module \
--with-http_sub_module \
--with-http_dav_module \
--with-http_flv_module \
--with-http_mp4_module \
--with-http_gunzip_module \
--with-http_gzip_static_module \
--with-http_random_index_module \
--with-http_secure_link_module \
--with-http_stub_status_module \
--with-http_auth_request_module \
--with-http_xslt_module=dynamic \
--with-http_image_filter_module=dynamic \
--with-http_geoip_module=dynamic \
--with-http_perl_module=dynamic \
--with-threads \
--with-stream \
--with-stream_ssl_module \
--with-stream_ssl_preread_module \
--with-stream_realip_module \
--with-stream_geoip_module=dynamic \
--with-http_slice_module \
--with-mail \
--with-mail_ssl_module \
--with-compat \
--with-file-aio \
--with-http_v2_module \
--add-module=/usr/src/ngx_brotli \
" \
&& addgroup -S nginx \
&& adduser -D -S -h /var/cache/nginx -s /sbin/nologin -G nginx nginx \
&& apk add --no-cache --virtual .build-deps \
gcc \
libc-dev \
make \
openssl-dev \
pcre-dev \
zlib-dev \
linux-headers \
curl \
gnupg1 \
libxslt-dev \
gd-dev \
geoip-dev \
perl-dev \
&& apk add --no-cache --virtual .brotli-build-deps \
autoconf \
libtool \
automake \
git \
g++ \
cmake \
&& mkdir -p /usr/src \
&& cd /usr/src \
&& git clone --recursive https://github.com/google/ngx_brotli.git \
&& cd ngx_brotli \
&& git checkout -b $NGX_BROTLI_COMMIT $NGX_BROTLI_COMMIT \
&& cd .. \
&& curl -fSL https://nginx.org/download/nginx-$NGINX_VERSION.tar.gz -o nginx.tar.gz \
&& curl -fSL https://nginx.org/download/nginx-$NGINX_VERSION.tar.gz.asc -o nginx.tar.gz.asc \
&& sha512sum nginx.tar.gz nginx.tar.gz.asc \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg --keyserver ipv4.pool.sks-keyservers.net --recv-keys "$GPG_KEYS" \
&& gpg --batch --verify nginx.tar.gz.asc nginx.tar.gz \
&& rm -rf "$GNUPGHOME" nginx.tar.gz.asc \
&& mkdir -p /usr/src \
&& tar -zxC /usr/src -f nginx.tar.gz \
&& rm nginx.tar.gz \
&& cd /usr/src/nginx-$NGINX_VERSION \
&& ./configure $CONFIG --with-debug \
&& make -j$(getconf _NPROCESSORS_ONLN) \
&& mv objs/nginx objs/nginx-debug \
&& mv objs/ngx_http_xslt_filter_module.so objs/ngx_http_xslt_filter_module-debug.so \
&& mv objs/ngx_http_image_filter_module.so objs/ngx_http_image_filter_module-debug.so \
&& mv objs/ngx_http_geoip_module.so objs/ngx_http_geoip_module-debug.so \
&& mv objs/ngx_http_perl_module.so objs/ngx_http_perl_module-debug.so \
&& mv objs/ngx_stream_geoip_module.so objs/ngx_stream_geoip_module-debug.so \
&& ./configure $CONFIG \
&& make -j$(getconf _NPROCESSORS_ONLN) \
&& make install \
&& rm -rf /etc/nginx/html/ \
&& mkdir /etc/nginx/conf.d/ \
&& mkdir -p /usr/share/nginx/html/ \
&& install -m644 html/index.html /usr/share/nginx/html/ \
&& install -m644 html/50x.html /usr/share/nginx/html/ \
&& install -m755 objs/nginx-debug /usr/sbin/nginx-debug \
&& install -m755 objs/ngx_http_xslt_filter_module-debug.so /usr/lib/nginx/modules/ngx_http_xslt_filter_module-debug.so \
&& install -m755 objs/ngx_http_image_filter_module-debug.so /usr/lib/nginx/modules/ngx_http_image_filter_module-debug.so \
&& install -m755 objs/ngx_http_geoip_module-debug.so /usr/lib/nginx/modules/ngx_http_geoip_module-debug.so \
&& install -m755 objs/ngx_http_perl_module-debug.so /usr/lib/nginx/modules/ngx_http_perl_module-debug.so \
&& install -m755 objs/ngx_stream_geoip_module-debug.so /usr/lib/nginx/modules/ngx_stream_geoip_module-debug.so \
&& ln -s ../../usr/lib/nginx/modules /etc/nginx/modules \
&& strip /usr/sbin/nginx* \
&& strip /usr/lib/nginx/modules/*.so \
&& rm -rf /usr/src/nginx-$NGINX_VERSION \
&& rm -rf /usr/src/ngx_brotli \
\
# Bring in gettext so we can get `envsubst`, then throw
# the rest away. To do this, we need to install `gettext`
# then move `envsubst` out of the way so `gettext` can
# be deleted completely, then move `envsubst` back.
&& apk add --no-cache --virtual .gettext gettext \
&& mv /usr/bin/envsubst /tmp/ \
\
&& runDeps="$( \
scanelf --needed --nobanner /usr/sbin/nginx /usr/lib/nginx/modules/*.so /tmp/envsubst \
| awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
| sort -u \
| xargs -r apk info --installed \
| sort -u \
)" \
&& apk add --no-cache --virtual .nginx-rundeps tzdata $runDeps \
&& apk del .build-deps \
&& apk del .brotli-build-deps \
&& apk del .gettext \
&& mv /tmp/envsubst /usr/local/bin/ \
\
# forward request and error logs to docker log collector
&& ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories && \
apk update && \
apk add --no-cache openssl brotli vim && \
rm -rf /var/cache/apk/*
RUN mkdir -p /etc/ssl/certs/private/
COPY ./server.crt /etc/ssl/certs/private/siap.crt
COPY ./server.key /etc/ssl/certs/private/siap.key
RUN openssl dhparam -out /etc/ssl/certs/private/dhparam.pem 4096
RUN rm -rf /etc/nginx/conf.d/*
COPY ./scripts/nginx.conf /etc/nginx/nginx.conf
COPY ./scripts/conf.d/brotli.conf /etc/nginx/conf.d/brotli.conf
COPY ./scripts/conf.d/siap.http.conf /etc/nginx/conf.d/siap.http.conf
COPY ./scripts/conf.d/siap.https.conf /etc/nginx/conf.d/siap.https.conf
COPY .envs ./.envs
RUN export $(grep -v '^#' .envs/nginx.env | xargs) && \
sed -i '2s/^.*$/ server '$SERVER_ENDERECO';/g' /etc/nginx/conf.d/siap.https.conf;
EXPOSE 80
STOPSIGNAL SIGTERM
RUN head /etc/nginx/conf.d/siap.https.conf
# RUN nginx -t
CMD ["nginx", "-g", "daemon off;"]
You're rebuilding your application every time you start the container stack. While I wouldn't expect this to usually take 10+ minutes, it's better to build the application just once, into a Docker image that can be reused.
You can use a multi-stage build to build the application into static files, and then package it into an Nginx server. The Dockerfile you show is actually most of the way there. You need to change the final CMD to a RUN so it runs during the build, and then add the additional stage to COPY the files into an nginx-based image:
FROM node:12-alpine AS builder
WORKDIR /home/app
... exactly what is in the question ...
# changed from CMD so the build runs during docker build
RUN ["ng", "build", "--prod", "--outputPath=dist"]
FROM alpine:3.10 AS server
... the existing nginx/Dockerfile ...
FROM server
COPY --from=builder /home/app/dist /usr/share/nginx/html
With the extended nginx/Dockerfile in the question, you'll need to do a little bit of additional work to combine these together. You need to COPY files from both the nginx and app subtrees. To make this work, you need to move the Dockerfile up to the top level of your directory tree, and when you COPY files into the image, you need to COPY app/package*.json or COPY nginx/scripts/nginx.conf from one of the subdirectories.
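Putting those pieces together, a combined top-level Dockerfile might look roughly like this (a sketch under the assumptions above; the elided steps come from the two existing Dockerfiles in the question):

```dockerfile
# Stage 1: build the Angular app from the app/ subtree
FROM node:12-alpine AS builder
WORKDIR /home/app
# ... apk and npm setup from app/Dockerfile ...
COPY app/package*.json app/*.lock ./
RUN yarn install --unsafe-perm
COPY app/ .
RUN ng build --prod --outputPath=dist

# Stage 2: the existing nginx build, COPYing from the nginx/ subtree
FROM alpine:3.10 AS server
# ... nginx compile and configuration steps from nginx/Dockerfile ...
COPY nginx/scripts/nginx.conf /etc/nginx/nginx.conf

# Final stage: static files into the server image
FROM server
COPY --from=builder /home/app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
```
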
In your docker-compose.yml file, then, you can remove the service that's only used to build files, since this is included in the image-build sequence. You can also remove the volume, again since the compiled application is included in the image.
version: '3.8'
services:
nginx:
build: . # Dockerfile needs to COPY app/... and COPY nginx/...
restart: always
env_file:
- ./nginx/.envs/nginx.env
ports:
- "80:80"
- "443:443"
# no volumes:; do not need to override container_name:
# no app: container
# no top-level volumes:
You can do even better than this in production. If you have access to a Docker registry -- Docker Hub, something your cloud provider offers, something you run yourself -- you can build this image separately, run whatever integration tests you need, and then push it to the registry. Instead of build:, set image: to point to the specific tagged image you need (it's good practice to use a different tag for each build).
services:
nginx:
image: my/nginx:20210524
# no build:
restart: always
env_file: [...]
ports: [...]
Now when you go to do a deploy, build, tag, and push the image, maybe using a CI system for automation. Once you've pushed the image, log into the production machine and change image: to point to the new build, and run docker-compose pull; docker-compose up -d. Compose will pull the updated image and then recreate the container using it. The image contains the prebuilt application, and you pulled it while the old container was already running, so there should be almost no downtime to do this update.

Elastic user password is not working for my docker image

When I use the official elasticsearch Docker image, the ELASTIC_PASSWORD env variable works fine:
docker run -dti -e ELASTIC_PASSWORD=my_own_password -e discovery.type=single-node elasticsearch:7.8.0
But when I build my own customized Docker image, ELASTIC_PASSWORD does not work. Can you please help me with this?
Here is my Dockerfile:
FROM ubuntu:18.04
ENV \
REFRESHED_AT=2020-06-20
###############################################################################
# INSTALLATION
###############################################################################
### install prerequisites (cURL, gosu, tzdata, JDK for Logstash)
RUN set -x \
&& apt update -qq \
&& apt install -qqy --no-install-recommends ca-certificates curl gosu tzdata openjdk-11-jdk-headless \
&& apt clean \
&& rm -rf /var/lib/apt/lists/* \
&& gosu nobody true \
&& set +x
### set current package version
ARG ELK_VERSION=7.8.0
### install Elasticsearch
# predefine env vars, as you can't define an env var that references another one in the same block
ENV \
ES_VERSION=${ELK_VERSION} \
ES_HOME=/opt/elasticsearch
ENV \
ES_PACKAGE=elasticsearch-${ES_VERSION}-linux-x86_64.tar.gz \
ES_GID=991 \
ES_UID=991 \
ES_PATH_CONF=/etc/elasticsearch \
ES_PATH_BACKUP=/var/backups \
KIBANA_VERSION=${ELK_VERSION}
RUN DEBIAN_FRONTEND=noninteractive \
&& mkdir ${ES_HOME} \
&& curl -O https://artifacts.elastic.co/downloads/elasticsearch/${ES_PACKAGE} \
&& tar xzf ${ES_PACKAGE} -C ${ES_HOME} --strip-components=1 \
&& rm -f ${ES_PACKAGE} \
&& groupadd -r elasticsearch -g ${ES_GID} \
&& useradd -r -s /usr/sbin/nologin -M -c "Elasticsearch service user" -u ${ES_UID} -g elasticsearch elasticsearch \
&& mkdir -p /var/log/elasticsearch ${ES_PATH_CONF} ${ES_PATH_CONF}/scripts /var/lib/elasticsearch ${ES_PATH_BACKUP}
To make this work (setting ELASTIC_PASSWORD on the command line and having it take effect) in your own container, you need to re-create what Elasticsearch's startup script does. It's not a trivial task.
For example, here is docker-entrypoint.sh from the official Docker image:
https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/bin/docker-entrypoint.sh
You can see that the script does all the 'hidden' work that lets us start the server with a single command.
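As a rough illustration only (this is not the official script; the paths reuse ES_HOME from the Dockerfile above, and the keystore step mirrors what the linked entrypoint does), an entrypoint honoring ELASTIC_PASSWORD could set the bootstrap password before starting the server:

```sh
#!/bin/bash
# hypothetical docker-entrypoint.sh sketch
set -e
if [[ -n "$ELASTIC_PASSWORD" ]]; then
  # the official image stores the password as bootstrap.password in the keystore
  echo "$ELASTIC_PASSWORD" | "${ES_HOME}/bin/elasticsearch-keystore" add -xf 'bootstrap.password'
fi
# drop root privileges and start the server
exec gosu elasticsearch "${ES_HOME}/bin/elasticsearch" "$@"
```
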

Docker image with aws-cli v2 and dind, based on Alpine:3.11

Hi, I'm struggling to create a Docker image with aws-cli v2 and Docker, based on Alpine 3.11.
I'm using the following Dockerfile:
FROM docker:stable
# the docker image is based on Alpine
RUN apk add curl && \
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
./aws/install
RUN aws --version && docker -v
I'm obtaining an output like this:
Step 6/6 : RUN aws --version && docker -v
---> Running in 5015c32e62fe
/bin/sh: aws: Permission denied
The command '/bin/sh -c aws --version && docker -v' returned a non-zero code: 127
This is a strange behavior.
AWS binaries won't work in Docker images based on Alpine, because they are compiled against glibc.
Two solutions:
build it from ubuntu:latest
use this Dockerfile, which adds glibc and then removes some unneeded files
FROM alpine:3.11
ENV GLIBC_VER=2.31-r0
RUN apk --no-cache add \
binutils \
curl \
&& curl -sL https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub -o /etc/apk/keys/sgerrand.rsa.pub \
&& curl -sLO https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VER}/glibc-${GLIBC_VER}.apk \
&& curl -sLO https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VER}/glibc-bin-${GLIBC_VER}.apk \
&& apk add --no-cache \
glibc-${GLIBC_VER}.apk \
glibc-bin-${GLIBC_VER}.apk \
&& curl -sL https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o awscliv2.zip \
&& unzip awscliv2.zip \
&& aws/install \
&& rm -rf \
awscliv2.zip \
aws \
/usr/local/aws-cli/v2/*/dist/aws_completer \
/usr/local/aws-cli/v2/*/dist/awscli/data/ac.index \
/usr/local/aws-cli/v2/*/dist/awscli/examples \
&& apk --no-cache del \
binutils \
curl \
&& rm glibc-${GLIBC_VER}.apk \
&& rm glibc-bin-${GLIBC_VER}.apk \
&& rm -rf /var/cache/apk/*
RUN apk add docker
RUN aws --version && docker --version
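The first suggestion (starting from a glibc-based image) could look roughly like this; treat it as a sketch, since the exact package set depends on your needs:

```dockerfile
FROM ubuntu:22.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl unzip ca-certificates docker.io \
 && curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip \
 && unzip awscliv2.zip \
 && ./aws/install \
 && rm -rf awscliv2.zip aws /var/lib/apt/lists/*
RUN aws --version && docker --version
```
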

change gradle Dockerfile to be executed as root user

I am working in GitLab and want to use Gradle to build my Java project, but I ran into this bug with the GitLab runner: https://gitlab.com/gitlab-org/gitlab-runner/issues/2570
One comment is: I can confirm that it works in v9.1.3 but v9.2.0 is broken. Only when I use the root user inside the container can I proceed. That really should be fixed, because this regression is seriously impacting security.
So my question is: in which places do I have to change the Dockerfile so it executes as the root user? https://github.com/keeganwitt/docker-gradle/blob/b0419babd3271f6c8e554fbc8bbd8dc909936763/jdk8-alpine/Dockerfile
My idea is to change the Dockerfile so it executes as root, push it to my registry, and use it inside GitLab. But I am not so much into Linux/Docker that I know where the user is defined in the file. Maybe I am totally wrong?
build_java:
  image: gradle:4.4.1-jdk8-alpine-root
  stage: build_java
  script:
    - gradle build
  artifacts:
    expire_in: 1 hour # Workaround to delete artifacts after the build; we only use artifacts to keep files between stages (but not after the build)
    paths:
      - build/
      - .gradle/
Dockerfile
FROM openjdk:8-jdk-alpine
CMD ["gradle"]
ENV GRADLE_HOME /opt/gradle
ENV GRADLE_VERSION 4.4.1
ARG GRADLE_DOWNLOAD_SHA256=e7cf7d1853dfc30c1c44f571d3919eeeedef002823b66b6a988d27e919686389
RUN set -o errexit -o nounset \
&& echo "Installing build dependencies" \
&& apk add --no-cache --virtual .build-deps \
ca-certificates \
openssl \
unzip \
\
&& echo "Downloading Gradle" \
&& wget -O gradle.zip "https://services.gradle.org/distributions/gradle-${GRADLE_VERSION}-bin.zip" \
\
&& echo "Checking download hash" \
&& echo "${GRADLE_DOWNLOAD_SHA256} *gradle.zip" | sha256sum -c - \
\
&& echo "Installing Gradle" \
&& unzip gradle.zip \
&& rm gradle.zip \
&& mkdir /opt \
&& mv "gradle-${GRADLE_VERSION}" "${GRADLE_HOME}/" \
&& ln -s "${GRADLE_HOME}/bin/gradle" /usr/bin/gradle \
\
&& apk del .build-deps \
\
&& echo "Adding gradle user and group" \
&& addgroup -S -g 1000 gradle \
&& adduser -D -S -G gradle -u 1000 -s /bin/ash gradle \
&& mkdir /home/gradle/.gradle \
&& chown -R gradle:gradle /home/gradle \
\
&& echo "Symlinking root Gradle cache to gradle Gradle cache" \
&& ln -s /home/gradle/.gradle /root/.gradle
# Create Gradle volume
USER gradle
VOLUME "/home/gradle/.gradle"
WORKDIR /home/gradle
RUN set -o errexit -o nounset \
&& echo "Testing Gradle installation" \
&& gradle --version
EDIT:
Okay, how do I use Gradle in Docker after the image has been pulled and is available in GitLab?
build_java:
  image: docker:dind
  stage: build_java
  script:
    - docker images
    - docker login -u _json_key -p "$(echo $GCR_SERVICE_ACCOUNT | base64 -d)" https://eu.gcr.io
    - docker pull eu.gcr.io/test/gradle:4.4.1-jdk8-alpine-root
    - docker images
    - ??WHAT COMMAND TO CALL GRADLE BUILD??
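For the last line of the job above, one possibility (a sketch; the mount path and gradle invocation are assumptions, the image path comes from the snippet) is to run the build inside the pulled image with docker run, mounting the checked-out project:

```yaml
script:
  - docker pull eu.gcr.io/test/gradle:4.4.1-jdk8-alpine-root
  - docker run --rm -v "$PWD":/home/gradle/project -w /home/gradle/project eu.gcr.io/test/gradle:4.4.1-jdk8-alpine-root gradle build
```
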
