Docker RUN with multiple commands and if conditions - docker

I do not have much knowledge of Dockerfiles; please help me with the requirement below.
I am looking for a docker RUN command like this:
RUN set -ex && \
yum install -y tar gzip && \
<Other set of commands which includes mkdir, curl, tar>
rm -vr properties && \
if [${arg} == "prod"] then rm -v conf/args.properties fi
This is not working and I am getting the error:
syntax error: unexpected end of file

It seems to me that you are missing one or two semicolons (plus the spaces inside the brackets).
If statements in shell need a ; after the condition when the then is on the same line.
I have also added a second ; after the rm statement, before fi.
Your code should look like
RUN set -ex && \
yum install -y tar gzip && \
<Other set of commands which includes mkdir, curl, tar>
rm -vr properties && \
if [ "${arg}" = "prod" ]; then rm -v conf/args.properties; fi
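To see why the punctuation matters, here is a minimal runnable sketch of the same conditional (the "prod" value and the file name are stand-ins): POSIX sh, which Docker's RUN uses by default, needs spaces inside the brackets, a ; before then, and a ; before fi. Quoting the variable and using = rather than == also keeps it portable beyond bash.

```shell
arg="prod"   # stand-in for the build arg

# Spaces inside [ ], ';' before then, ';' before fi -- valid POSIX sh:
if [ "${arg}" = "prod" ]; then echo "would run: rm -v conf/args.properties"; fi
# -> would run: rm -v conf/args.properties
```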

Related

Docker container doesn't finish with HTTP POST audio file in JMeter

I am running a load test using JMeter. The test sends a POST request with an audio file to the server and receives a response. I chose to go with Docker on a Linux VM because I will need to do distributed testing later and thought it might be easier to execute with Docker. When I use a 1 hr audio file everything seems to work fine, except that sometimes JMeter executes more threads than scheduled. However, if I use a larger file, like 3 hr or 5 hr, the container doesn't finish and exit, even though I can see on the server side that the file finished processing over 10 minutes ago. For the task I use a modified Dockerfile and image that I found on Docker Hub / GitHub ("justb4/jmeter"). The Dockerfile is as follows:
# inspired by https://github.com/hauptmedia/docker-jmeter and
# https://github.com/hhcordero/docker-jmeter-server/blob/master/Dockerfile
FROM alpine:3.12
MAINTAINER Just van den Broecke<just#justobjects.nl>
# modified by Weronika Siwak
ARG JMETER_VERSION="5.4.3"
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN ${JMETER_HOME}/bin
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
ENV JMETER_PLUGINS_FOLDER ${JMETER_HOME}/lib/ext/
# Install extra packages
# Set TimeZone, See: https://github.com/gliderlabs/docker-alpine/issues/136#issuecomment-612751142
ARG TZ="America/Chicago"
ENV TZ ${TZ}
RUN apk update \
&& apk upgrade \
&& apk add ca-certificates \
&& update-ca-certificates \
&& apk add --update openjdk8-jre tzdata curl unzip bash \
&& apk add --no-cache nss \
&& rm -rf /var/cache/apk/* \
&& mkdir -p /tmp/dependencies \
&& curl -L --silent ${JMETER_DOWNLOAD_URL} > /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz \
&& mkdir -p /opt \
&& tar -xzf /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz -C /opt \
&& rm -rf /tmp/dependencies \
&& mkdir -p /opt/apache-jmeter-${JMETER_VERSION}/bin/test-plans \
&& mkdir -p /opt/apache-jmeter-${JMETER_VERSION}/bin/audio
RUN wget https://jmeter-plugins.org/files/packages/jpgc-graphs-basic-2.0.zip \
&& unzip jpgc-graphs-basic-2.0.zip -d ${JMETER_HOME} \
&& rm jpgc-graphs-basic-2.0.zip \
&& wget https://jmeter-plugins.org/files/packages/jpgc-graphs-additional-2.0.zip \
&& unzip -n jpgc-graphs-additional-2.0.zip -d ${JMETER_HOME} \
&& rm jpgc-graphs-additional-2.0.zip \
&& wget https://jmeter-plugins.org/files/packages/jpgc-cmd-2.2.zip \
&& unzip -n jpgc-cmd-2.2.zip -d ${JMETER_HOME} \
&& rm jpgc-cmd-2.2.zip \
&& wget https://jmeter-plugins.org/files/packages/jpgc-casutg-2.10.zip \
&& unzip -n jpgc-casutg-2.10.zip -d ${JMETER_HOME} \
&& rm jpgc-casutg-2.10.zip \
&& wget https://jmeter-plugins.org/files/packages/jpgc-filterresults-2.2.zip \
&& unzip -n jpgc-filterresults-2.2.zip -d ${JMETER_HOME} \
&& rm jpgc-filterresults-2.2.zip \
&& wget https://jmeter-plugins.org/files/packages/jpgc-ggl-2.0.zip \
&& unzip -n jpgc-ggl-2.0.zip -d ${JMETER_HOME} \
&& rm jpgc-ggl-2.0.zip \
&& wget https://jmeter-plugins.org/files/packages/jmeter.pack-listener-1.7.zip \
&& unzip -n jmeter.pack-listener-1.7.zip -d ${JMETER_HOME} \
&& rm jmeter.pack-listener-1.7.zip \
&& wget https://jmeter-plugins.org/files/packages/bzm-parallel-0.11.zip \
&& unzip -n bzm-parallel-0.11.zip -d ${JMETER_HOME} \
&& rm bzm-parallel-0.11.zip \
&& wget https://jmeter-plugins.org/files/packages/jpgc-perfmon-2.1.zip \
&& unzip -n jpgc-perfmon-2.1.zip -d ${JMETER_HOME} \
&& rm jpgc-perfmon-2.1.zip \
&& wget https://jmeter-plugins.org/files/packages/jpgc-synthesis-2.2.zip \
&& unzip -n jpgc-synthesis-2.2.zip -d ${JMETER_HOME} \
&& rm jpgc-synthesis-2.2.zip
# TODO: plugins (later)
# && unzip -oq "/tmp/dependencies/JMeterPlugins-*.zip" -d $JMETER_HOME
# Set global PATH such that "jmeter" command is found
ENV PATH $PATH:$JMETER_BIN
# Entrypoint has same signature as "jmeter" command
COPY entrypoint.sh /
WORKDIR ${JMETER_HOME}
RUN ["chmod", "+x", "/entrypoint.sh"]
ENTRYPOINT [ "/entrypoint.sh"]
The entrypoint.sh:
#!/bin/bash
# Inspired from https://github.com/hhcordero/docker-jmeter-client
# Basically runs jmeter, assuming the PATH is set to point to JMeter bin-dir (see Dockerfile)
#
# This script expects the standard JMeter command parameters.
#
# Install jmeter plugins available on /plugins volume
if [ -d /plugins ]
then
for plugin in /plugins/*.jar; do
cp $plugin $(pwd)/lib/ext
done;
fi
# Execute JMeter command
set -e
freeMem=`awk '/MemFree/ { print int($2/1024) }' /proc/meminfo`
s=$(($freeMem/10*8))
x=$(($freeMem/10*8))
n=$(($freeMem/10*2))
export JVM_ARGS="-Xmn${n}m -Xms${s}m -Xmx${x}m"
echo "START Running Jmeter on `date`"
echo "JVM_ARGS=${JVM_ARGS}"
echo "jmeter args=$@"
# Keep entrypoint simple: we must pass the standard JMeter arguments
EXTRA_ARGS=-Dlog4j2.formatMsgNoLookups=true
echo "jmeter ALL ARGS=${EXTRA_ARGS} $@"
jmeter ${EXTRA_ARGS} "$@"
echo "END Running Jmeter on `date`"
# -n \
# -t "/tests/${TEST_DIR}/${TEST_PLAN}.jmx" \
# -l "/tests/${TEST_DIR}/${TEST_PLAN}.jtl"
# exec tail -f jmeter.log
# -D "java.rmi.server.hostname=${IP}" \
# -D "client.rmi.localport=${RMI_PORT}" \
# -R $REMOTE_HOSTS
For tests and results I use volumes. I execute with: jmeter -n -t bin/test-plans/1_usr_3_hr_15n.jmx -l /opt/apache-jmeter-5.4.3/bin/results/1_usr_3_hr_15n/1_usr_3_hr_15n.jtl -e -o /opt/apache-jmeter-5.4.3/bin/results/1_usr_3_hr_15n. I don't know why it works for the 1 hr audio but not larger files, or why it executes more threads than scheduled. The test plan is simple: one POST HTTP request, no loops, 1 thread per second.
Docker neither solves the problem of JMeter execution nor makes distributed execution easier (especially if you're running everything on the same physical or virtual machine); it just consumes resources and adds yet another layer where errors can occur.
If JMeter test execution doesn't finish in the anticipated time I can think of the following reasons and steps to take:
The server fails to respond. By default JMeter waits for the response forever (unless limited by underlying OS or JVM timeouts), so to avoid the test "hanging" when the server fails to provide a response, I would recommend setting a reasonable timeout. This can be done under the "Advanced" tab of the HTTP Request sampler (or better, of the HTTP Request Defaults).
Check jmeter.log file for any suspicious entries
You can log into slave containers and take thread dumps to see what exactly threads are doing and where/why they're stuck
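As an aside, the entrypoint in the question sizes the JVM heap from /proc/meminfo, and an undersized heap is one classic reason long-running tests stall. That arithmetic can be sanity-checked in isolation; the MemFree figure below is made up for illustration:

```shell
# Fake meminfo fragment; the real entrypoint reads /proc/meminfo instead
printf 'MemTotal: 8000000 kB\nMemFree: 4096000 kB\n' > /tmp/meminfo

freeMem=$(awk '/MemFree/ { print int($2/1024) }' /tmp/meminfo)  # kB -> MiB
s=$((freeMem/10*8))   # initial heap: 80% of free memory
x=$((freeMem/10*8))   # max heap: 80% of free memory
n=$((freeMem/10*2))   # young generation: 20% of free memory
echo "JVM_ARGS=-Xmn${n}m -Xms${s}m -Xmx${x}m"
# -> JVM_ARGS=-Xmn800m -Xms3200m -Xmx3200m
```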

Compiling nginx 1.21.1 in a docker image fails on alpine:3.14, but alpine:3.12 is OK. Why?

I am trying to build a docker image from alpine:3.14.
The error is as follows:
make -f objs/Makefile
make: make: Operation not permitted
make: *** [Makefile:10: build] Error 127
Then I switched to alpine:3.12 and alpine:3.13 and found that both are OK!
The following is the key problematic part of my nginx build, based on alpine:3.14:
# Omit irrelevant code
RUN \
addgroup -S www && adduser www -D -S -s /bin/sh -G www \
&& wget -P /home/soft https://github.com/vozlt/nginx-module-vts/archive/v0.1.18.tar.gz \
&& wget -P /home/soft http://nginx.org/download/nginx-1.21.1.tar.gz \
&& wget -P /home/soft https://ftp.pcre.org/pub/pcre/pcre-8.44.tar.gz \
&& cd /home/soft && tar -zxf nginx-1.21.1.tar.gz && tar -zxf v0.1.18.tar.gz && tar -zxf pcre-8.44.tar.gz \
&& cd /home/soft/nginx-1.21.1 \
&& ./configure --user=www --group=www --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-http_v2_module --with-http_gzip_static_module --with-stream --with-http_sub_module --with-pcre=/home/soft/pcre-8.44 --add-module=/home/soft/nginx-module-vts-0.1.18 \
&& make && make install \
# Omit irrelevant code
Error 127 can mean one of two things: either the binary for the command is not found in the PATH (note that it can also happen that the command is found but a library it needs is missing), or the binary doesn't have execute permissions. By the look of it, I suspect it doesn't have execute permissions.
I would suggest running the alpine:3.14 container and inspecting the PATH variable, the location of the make binary, and its permissions.
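The exit code 127 convention is easy to reproduce outside Docker: any POSIX shell returns it when the command simply cannot be found (the command name below is deliberately made up):

```shell
# Running a nonexistent command yields exit status 127:
sh -c 'definitely-not-a-real-command' 2>/dev/null
echo "exit code: $?"
# -> exit code: 127
```

Seeing 127 with a command that does exist on disk is what points at a missing interpreter, a missing shared library, or missing execute permissions rather than a typo.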

Micromamba inside Docker container

I have a base Docker image:
FROM ubuntu:21.04
WORKDIR /app
RUN apt-get update && apt-get install -y wget bzip2 \
&& wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba \
&& touch /root/.bashrc \
&& ./bin/micromamba shell init -s bash -p /opt/conda \
&& cp /root/.bashrc /opt/conda/bashrc \
&& apt-get clean autoremove --yes \
&& rm -rf /var/lib/{apt,dpkg,cache,log}
SHELL ["bash", "-l" ,"-c"]
and derive from it another one:
ARG BASE
FROM $BASE
RUN source /opt/conda/bashrc && micromamba activate \
&& micromamba create --file environment.yaml -p /env
While building the second image I get the following error for the RUN section: micromamba: command not found.
If I run the first (base) image manually, I can launch micromamba and it runs correctly.
If I run the temporary image created while building the second image, micromamba is available via the CLI and runs correctly.
If I inherit from debian:buster or alpine, for example, it builds perfectly.
What is the problem with Ubuntu? Why can't it see micromamba while building the second Docker image?
PS: I am using scaffold for building, so it correctly understands where $BASE is and what it is.
The ubuntu:21.04 image comes with a /root/.bashrc file that begins with:
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
When the second Dockerfile executes RUN source /opt/conda/bashrc, PS1 is not set and thus the remainder of the bashrc file does not execute. The remainder of the bashrc file is where micromamba initialization occurs, including the setup of the micromamba bash function that is used to activate a micromamba environment.
The debian:buster image has a smaller /root/.bashrc that does not have a line similar to [ -z "$PS1" ] && return and therefore the micromamba function gets loaded.
The alpine image does not come with a /root/.bashrc so it also does not contain the code to exit the file early.
If you want to use the ubuntu:21.04 image, you could modify your first Dockerfile like this:
FROM ubuntu:21.04
WORKDIR /app
RUN apt-get update && apt-get install -y wget bzip2 \
&& wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba \
&& touch /root/.bashrc \
&& ./bin/micromamba shell init -s bash -p /opt/conda \
# this line has been modified:
&& grep -vF '[ -z "$PS1" ] && return' /root/.bashrc > /opt/conda/bashrc \
&& apt-get clean autoremove --yes \
&& rm -rf /var/lib/{apt,dpkg,cache,log}
SHELL ["bash", "-l" ,"-c"]
This will strip out the one line that causes the early termination.
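The stripping step can be reproduced in miniature. grep's -F flag treats the pattern as a fixed string rather than a regex, which matters here because the guard line is full of bracket metacharacters (the sample bashrc below is a stand-in):

```shell
# Stand-in for the bashrc ubuntu ships, including the early-return guard:
printf '%s\n' '# comment' '[ -z "$PS1" ] && return' 'alias ll="ls -l"' > /tmp/bashrc

# -F matches the guard line literally and -v drops it:
grep -vF '[ -z "$PS1" ] && return' /tmp/bashrc
# -> # comment
# -> alias ll="ls -l"
```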
Alternatively, you could make use of the existing mambaorg/micromamba docker image. The mambaorg/micromamba:latest image is based on debian:slim, but mambaorg/micromamba:jammy will get you an Ubuntu-based image (disclosure: I maintain this image).

bad variable name in Dockerfile

I have this Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
RUN export EOSIO_LOCATION=~/eosio/eos \
export EOSIO_INSTALL_LOCATION=$EOSIO_LOCATION/../install \
mkdir -p $EOSIO_INSTALL_LOCATION
RUN git clone https://github.com/EOSIO/eos.git $EOSIO_LOCATION \
cd $EOSIO_LOCATION && git submodule update --init --recursive
ENTRYPOINT ["/bin/bash"]
And the error is: /bin/sh: 1: export: -p: bad variable name
How can I fix it?
You currently don't have any separation between the export and mkdir commands in the RUN statement.
You probably want to concatenate the commands with &&. This ensures that each command runs only if the command before it succeeded. You may also use ; to separate commands, i.e.
RUN export EOSIO_LOCATION=~/eosio/eos && \
export EOSIO_INSTALL_LOCATION=$EOSIO_LOCATION/../install && \
mkdir -p $EOSIO_INSTALL_LOCATION
NOTE You probably don't need to export these variables and could:
EOSIO_LOCATION=... && EOSIO_INSTALL_LOCATION=... && mkdir ...
There's a Dockerfile ENV instruction that may be preferable:
ENV EOSIO_LOCATION=${PWD}/eosio/eos
ENV EOSIO_INSTALL_LOCATION=${EOSIO_LOCATION}/../install
RUN mkdir -p ${EOSIO_INSTALL_LOCATION}
Personal preference is to wrap env vars in ${...} and to use ${PWD} instead of ~ as it feels more explicit.
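The failure mode is reproducible in any POSIX shell: without separators the entire line is parsed as a single export call, and -p is then rejected as a variable name (the paths below are stand-ins for the question's):

```shell
# Broken: one giant `export`; it fails on `-p` (bash says "not a valid
# identifier", dash says "bad variable name"), so the exit status is nonzero
sh -c 'export LOC=/tmp/eosio/eos export INSTALL=$LOC/../install mkdir -p $INSTALL' 2>/dev/null
echo "broken exit status: $?"

# Fixed: && separates the three commands, so each runs in turn
sh -c 'export LOC=/tmp/eosio/eos && export INSTALL=$LOC/../install && mkdir -p "$INSTALL"'
echo "fixed exit status: $?"
# -> fixed exit status: 0
```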

Docker doesn't find file

I'm working on a project that uses a Docker image for one specific feature; other than that I don't need Docker at all, so I don't understand much about it. The issue is that Docker doesn't find a file that is actually in the folder, and the build process breaks.
When trying to create the image using docker build -t project/render-worker . the error is this:
Step 18/23 : RUN bin/composer-install && php composer-setup.php --install-dir=/bin && php -r 'unlink("composer-setup.php");' && php /bin/composer.phar global require hirak/prestissimo
---> Running in 695db3bf2f02
/bin/sh: 1: bin/composer-install: not found
The command '/bin/sh -c bin/composer-install && php composer-setup.php --install-dir=/bin && php -r 'unlink("composer-setup.php");' && php /bin/composer.phar global require hirak/prestissimo' returned a non-zero code: 127
As mentioned the file composer-install does exist and this is what's in it:
#!/bin/sh
EXPECTED_SIGNATURE="$(wget -q -O - https://composer.github.io/installer.sig)"
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
ACTUAL_SIGNATURE="$(php -r "echo hash_file('SHA384', 'composer-setup.php');")"
if [ "$EXPECTED_SIGNATURE" != "$ACTUAL_SIGNATURE" ]
then
echo 'ERROR: Invalid installer signature'
rm composer-setup.php
fi
Basically this is to get composer as you can see.
This is the Docker file:
FROM php:7.2-apache
RUN echo 'deb http://ftp.debian.org/debian stretch-backports main' > /etc/apt/sources.list.d/backports.list
RUN apt-get update
RUN apt-get install -y --no-install-recommends \
libpq-dev \
libxml2-dev \
ffmpeg \
imagemagick \
wget \
git \
zlib1g-dev \
libpng-dev \
unzip \
mencoder \
parallel \
ruby-dev
RUN apt-get -t stretch-backports install -y --no-install-recommends \
libav-tools \
&& rm -rf /var/lib/apt/lists/*
RUN docker-php-ext-install \
pcntl \
pdo_pgsql \
pgsql \
soap \
gd \
zip
RUN gem install compass
RUN a2enmod rewrite
ENV APACHE_RUN_USER root
ENV APACHE_RUN_GROUP root
EXPOSE 80
WORKDIR /app
COPY . /app
# Configuring apache to run the symfony app
COPY config/docker/apache.conf /etc/apache2/sites-enabled/000-default.conf
RUN echo "export DATABASE_URL" >> /etc/apache2/envvars \
&& echo ". /etc/environment" >> /etc/apache2/envvars
RUN wget -cqO- https://nodejs.org/dist/v10.15.3/node-v10.15.3-linux-x64.tar.xz | tar -xJ
RUN cp -a node-v10.15.3-linux-x64/bin /usr \
&& cp -a node-v10.15.3-linux-x64/include /usr \
&& cp -a node-v10.15.3-linux-x64/lib /usr \
&& cp -a node-v10.15.3-linux-x64/share /usr/ \
&& rm -rf node-v10.15.3-linux-x64 node-v10.15.3-linux-x64.tar.xz
RUN bin/composer-install \
&& php composer-setup.php --install-dir=/bin \
&& php -r "unlink('composer-setup.php');" \
# Install prestissimo for dramatically faster `composer install`
&& php /bin/composer.phar global require hirak/prestissimo
RUN APP_ENV=prod APP_SECRET= DATABASE_URL= AWS_KEY= AWS_SECRET= AWS_REGION= MEDIA_S3_BUCKET= \
GIPHY_API_KEY= FACEBOOK_APP_ID= FACEBOOK_APP_SECRET= \
GOOGLE_API_KEY= GOOGLE_CLIENT_ID= GOOGLE_CLIENT_SECRET= STRIPE_SECRET_KEY= STRIPE_ENDPOINT_SECRET= \
THEYSAIDSO_API_KEY= REV_CLIENT_API_KEY= REV_USER_API_KEY= REV_API_ENDPOINT= RENDER_QUEUE_URL= \
CLOUDWATCH_LOG_GROUP_NAME= \
php /bin/composer.phar install --no-interaction --no-dev --prefer-dist --optimize-autoloader --no-scripts \
&& php /bin/composer.phar clear-cache
RUN npm install \
&& node_modules/bower/bin/bower install --allow-root \
&& node_modules/grunt/bin/grunt
# Don't allow it to keep logs around; they're emitted on STDOUT and sent to AWS
# CloudWatch from there, so we don't need them on disk filling up the space
RUN mkdir -p var/cache/prod && chmod -R 777 var/cache/prod
RUN mkdir -p var/log && ln -s /dev/null var/log/prod.log \
&& ln -s /dev/null var/log/prod.deprecations.log && chmod -R 777 var/log
CMD ["/usr/bin/env", "bash", "./bin/start_render_worker"]
Like I said, unfortunately I don't have the slightest idea of how Docker works and what's going on, just that I need it. I'm running Docker on Win10 Pro and, to make matters even worse, it is actually working for another dev running Win10. We tried a few things but we can't make it work. I tried cloning the repo in other locations with no success at all. Everything before this particular step runs correctly.
[EDIT]
As suggested by the users I ran RUN ls bin/ before the composer install line and this is the result:
Step 18/24 : RUN ls bin/
---> Running in 6cb72090a069
append_captions
capture
composer-install
concat_project_video
console
encode_frames
encode_frames_to_gif
format_video_for_concatenation
generate_meme_bar
image_to_video
install.sh
phpcs
phpunit
process_render_queue
publish_docker_image
run_animation_worker
run_render_worker
run_render_worker_osx
start_render_worker
update
Removing intermediate container 6cb72090a069
As you can see composer-install is there so this is quite baffling.
Also I checked and set the line ending sequence to LF and the result is the same error.
[SECOND EDIT]
I added COPY bin/composer-install /bin
Then RUN ls bin/
And the results are the same. The ls command finds the file but the error persists. Also adding a slash before bin doesn't change anything :(
