Parameterization issue in Dockerfile

I want to parameterize a file name like API${version}Name.jmx so that the version is replaced by a value I pass when running the image.
For example, if I pass "5" at runtime it should resolve to "API5Name.jmx", but at the moment it resolves to "APIName.jmx".
I tried with ${} and with $, but the value is not getting substituted.
Is there any way to do this? Please guide me.
Below is the sh file I use to run Docker:
environmentName=$1
threadUser=$2
rampUpPeriod=$3
loopCount=$4
version=$5
docker build --no-cache=true -t jmeterimage -f Dockerfile.txt .
docker run -d --name jmimages -e "ENVIRONMENTNAME=${environmentName}" -e "THREADUSERS=${threadUser}" -e "RAMPUPPERIOD=${rampUpPeriod}" -e "LOOPCOUNT=${loopCount}" -e "VERSION=${version}" -t jmeterimage
Below is the Dockerfile:
FROM alpine:3.12
ARG environmentName
ARG threadUsers
ARG rampUpPeriod
ARG loopCount
ARG version
ARG JMETER_VERSION="5.4.1"
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN ${JMETER_HOME}/bin
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
# ENV ENVIRONMENTNAME=${environmentName}
# ENV THREADUSERS=${threadUsers}
# ENV RAMPUPPERIOD=${rampUpPeriod}
# ENV LOOPCOUNT=${loopCount}
# Install extra packages
# Set TimeZone, See: https://github.com/gliderlabs/docker-alpine/issues/136#issuecomment-612751142
ARG TZ="Asia/Kolkata"
ENV TZ ${TZ}
RUN apk update \
&& apk upgrade \
&& apk add ca-certificates \
&& update-ca-certificates \
&& apk add --update openjdk8-jre tzdata curl unzip bash \
&& apk add --no-cache nss \
&& rm -rf /var/cache/apk/* \
&& mkdir -p /tmp/dependencies \
&& curl -L --silent ${JMETER_DOWNLOAD_URL} > /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz \
&& mkdir -p /opt \
&& tar -xzf /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz -C /opt \
&& rm -rf /tmp/dependencies
# TODO: plugins (later)
# && unzip -oq "/tmp/dependencies/JMeterPlugins-*.zip" -d $JMETER_HOME
# Set global PATH such that "jmeter" command is found
ENV PATH $PATH:$JMETER_BIN
# Entrypoint has same signature as "jmeter" command
# RUN echo ${ENVIRONMENTNAME}
# RUN echo ${THREADUSERS}
# RUN echo ${RAMPUPPERIOD}
# RUN echo ${LOOPCOUNT}
WORKDIR ${JMETER_HOME}
CMD sh /opt/apache-jmeter-5.4.1/bin/entrypoint.sh && jmeter -JCookieManager.save.cookies=true -JEnvName=${ENVIRONMENTNAME} -JthreadUsers=${THREADUSERS} -JrampUpPeriod=${RAMPUPPERIOD} -JloopCount=${LOOPCOUNT} -n -t /opt/apache-jmeter-5.4.1/bin/API${VERSION}Name.jmx -l /opt/apache-jmeter-5.4.1/bin/demoresults.jtl -e -o /opt/apache-jmeter-5.4.1/bin/demoresults -f
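For illustration, a minimal sketch (the image name and the echo command are made up for this example) showing that a shell-form CMD expands ${VERSION} when the container starts, from whatever -e VERSION=... is passed to docker run, not at build time:
# Dockerfile
FROM alpine:3.12
# Shell-form CMD runs under /bin/sh -c, so ${VERSION} is resolved at container start
CMD echo "Resolved test plan: API${VERSION}Name.jmx"
Building and running it:
docker build -t version-test .
docker run --rm -e VERSION=5 version-test
# prints: Resolved test plan: API5Name.jmx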

Related

Run 'opentsdb' image as non-root

I'm trying to build a custom image of OpenTSDB to run as a non-root user. Our k8s clusters have security policies that don't allow containers to run as root. I'm using an existing Dockerfile from here: https://hub.docker.com/r/petergrace/opentsdb-docker/dockerfile
Below is my Dockerfile, where I have added an extra step to create a new user 'opentsdb' and, at the end, switch to USER 'opentsdb'.
FROM alpine:latest
ENV TINI_VERSION v0.18.0
ENV TSDB_VERSION 2.4.0
ENV HBASE_VERSION 1.4.4
ENV GNUPLOT_VERSION 5.2.4
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk
ENV PATH $PATH:/usr/lib/jvm/java-1.8-openjdk/bin/
ENV ALPINE_PACKAGES "rsyslog bash openjdk8 make wget libgd libpng libjpeg libwebp libjpeg-turbo cairo pango lua"
ENV BUILD_PACKAGES "build-base autoconf automake git python3-dev cairo-dev pango-dev gd-dev lua-dev readline-dev libpng-dev libjpeg-turbo-dev libwebp-dev sed"
ENV HBASE_OPTS "-XX:+UseConcMarkSweepGC -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
ENV JVMARGS "-XX:+UseConcMarkSweepGC -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -enableassertions -enablesystemassertions"
RUN addgroup opentsdb && adduser -D -u 100 -G opentsdb opentsdb
# Tini is a tiny init that helps when a container is being culled to stop things nicely
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-amd64 /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
# Add the base packages we'll need
RUN apk --update add apk-tools \
&& apk add ${ALPINE_PACKAGES} \
# repo required for gnuplot \
--repository http://dl-cdn.alpinelinux.org/alpine/v3.0/testing/ \
&& mkdir -p /opt/opentsdb
WORKDIR /opt/opentsdb/
# Add build deps, build opentsdb, and clean up afterwards.
RUN set -ex && apk add --virtual builddeps ${BUILD_PACKAGES}
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN wget --no-check-certificate \
-O v${TSDB_VERSION}.zip \
https://github.com/OpenTSDB/opentsdb/archive/v${TSDB_VERSION}.zip \
&& unzip v${TSDB_VERSION}.zip \
&& rm v${TSDB_VERSION}.zip \
&& cd /opt/opentsdb/opentsdb-${TSDB_VERSION} \
&& echo "tsd.http.request.enable_chunked = true" >> src/opentsdb.conf \
&& echo "tsd.http.request.max_chunk = 1000000" >> src/opentsdb.conf
RUN cd /opt/opentsdb/opentsdb-${TSDB_VERSION} \
&& find . | xargs grep -s central.maven.org | cut -f1 -d : | xargs sed -i "s/http:\/\/central/https:\/\/repo1/g" \
&& find . | xargs grep -s repo1.maven.org | cut -f1 -d : | xargs sed -i "s/http:\/\/repo1/https:\/\/repo1/g" \
&& ./build.sh \
&& cp build-aux/install-sh build/build-aux \
&& cd build \
&& make install \
&& cd / \
&& rm -rf /opt/opentsdb/opentsdb-${TSDB_VERSION}
RUN cd /tmp && \
wget --no-check-certificate https://sourceforge.net/projects/gnuplot/files/gnuplot/${GNUPLOT_VERSION}/gnuplot-${GNUPLOT_VERSION}.tar.gz && \
tar xzf gnuplot-${GNUPLOT_VERSION}.tar.gz && \
cd gnuplot-${GNUPLOT_VERSION} && \
./configure && \
make install && \
cd /tmp && rm -rf /tmp/gnuplot-${GNUPLOT_VERSION} && rm /tmp/gnuplot-${GNUPLOT_VERSION}.tar.gz
RUN apk del builddeps && rm -rf /var/cache/apk/*
#Install HBase and scripts
RUN mkdir -p /data/hbase /root/.profile.d /opt/downloads
WORKDIR /opt/downloads
RUN wget -O hbase-${HBASE_VERSION}.bin.tar.gz http://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz \
&& tar xzvf hbase-${HBASE_VERSION}.bin.tar.gz \
&& mv hbase-${HBASE_VERSION} /opt/hbase \
&& rm -r /opt/hbase/docs \
&& rm hbase-${HBASE_VERSION}.bin.tar.gz
# Add misc startup files
RUN ln -s /usr/local/share/opentsdb/etc/opentsdb /etc/opentsdb \
&& rm /etc/opentsdb/opentsdb.conf \
&& mkdir /opentsdb-plugins
ADD files/opentsdb.conf /etc/opentsdb/opentsdb.conf.sample
ADD files/hbase-site.xml /opt/hbase/conf/hbase-site.xml.sample
ADD files/start_opentsdb.sh /opt/bin/
ADD files/create_tsdb_tables.sh /opt/bin/
ADD files/start_hbase.sh /opt/bin/
ADD files/entrypoint.sh /entrypoint.sh
# Fix ENV variables in installed scripts
RUN for i in /opt/bin/start_hbase.sh /opt/bin/start_opentsdb.sh /opt/bin/create_tsdb_tables.sh; \
do \
sed -i "s#::JAVA_HOME::#$JAVA_HOME#g; s#::PATH::#$PATH#g; s#::TSDB_VERSION::#$TSDB_VERSION#g;" $i; \
done
RUN echo "export HBASE_OPTS=\"${HBASE_OPTS}\"" >> /opt/hbase/conf/hbase-env.sh
#4242 is tsdb, rest are hbase ports
EXPOSE 60000 60010 60030 4242 16010 16070
USER opentsdb
#HBase is configured to store data in /data/hbase, vol-mount it to persist your data.
VOLUME ["/data/hbase", "/tmp", "/opentsdb-plugins"]
CMD ["/entrypoint.sh"]
However, the newly built image throws a "permission denied" error for the files under /opt/bin/, and OpenTSDB does not get deployed correctly.
Locally, on Docker Desktop, everything works fine as root when I run the command below:
docker run -dp 4242:4242 petergrace/opentsdb-docker
Do I need to add any chown commands too?
Could you help me get OpenTSDB deployed correctly with uid 100? Thanks in advance!
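As a rough sketch only (not verified against this image): files brought in with ADD are owned by root, and shell scripts are not necessarily executable, so one common pattern is to fix permissions and ownership before switching users, for example just before the USER opentsdb line:
# Hypothetical additions; the paths are the ones used in the Dockerfile above
RUN chmod +x /entrypoint.sh /opt/bin/*.sh \
 && chown -R opentsdb:opentsdb /data/hbase /opt/hbase /opt/bin /opentsdb-plugins
USER opentsdb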

Unable to Copy s3 files inside Docker container

I am new to Docker and AWS. I am trying to create a JMeter image and pass in the JMX script at runtime. For that, I thought copying files from S3 inside the container would be the best fit. So initially I tried to copy the files from S3 to my local host using the command below:
aws s3 cp s3://bucketname/sample.jmx .
I was able to download the file successfully onto my local system.
I then created a Docker image with the latest AWS CLI installed and tried the same thing; the message shows "download: s3://bucketname/sample.jmx to current folder", but I am not able to see the file.
On the other hand, I was able to copy a file from the container to S3 using the command:
aws s3 cp /tmp/sample.jmx s3://bucketname/
Further details:
Base image - alpine:3.12.4
Credentials - passed inline with the docker run command, like below:
docker run -it --rm -e AWS_DEFAULT_REGION='us-east-2' -e AWS_ACCESS_KEY_ID='aaaaaa' -e AWS_SECRET_ACCESS_KEY='dsfssdfds' dockerimage aws s3 cp s3://bucketname/sample.jmx /tmp
Complete Dockerfile:
FROM alpine:3.12.4
ARG JMETER_VERSION="5.3"
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN ${JMETER_HOME}/bin
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
# Install extra packages
# Set TimeZone, See: https://github.com/gliderlabs/docker-alpine/issues/136#issuecomment-612751142
ARG TZ="Europe/Amsterdam"
ENV TZ ${TZ}
RUN apk update \
&& apk upgrade \
&& apk add ca-certificates \
&& update-ca-certificates \
&& apk add --update openjdk8-jre tzdata curl unzip bash \
&& apk add --no-cache nss \
&& rm -rf /var/cache/apk/* \
&& mkdir -p /tmp/dependencies \
&& curl -L --silent ${JMETER_DOWNLOAD_URL} > /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz \
&& mkdir -p /opt \
&& tar -xzf /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz -C /opt \
&& rm -rf /tmp/dependencies
# TODO: plugins (later)
# && unzip -oq "/tmp/dependencies/JMeterPlugins-*.zip" -d $JMETER_HOME
# Set global PATH such that "jmeter" command is found
ENV PATH $PATH:$JMETER_BIN
RUN apk update && \
apk add --no-cache python3 py3-pip \
&& pip3 install --upgrade pip
RUN pip3 --no-cache-dir install --upgrade awscli
ENV PATH $PATH:/usr/bin/aws
CMD ["/bin/bash"]
I could really use some help here.
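As an aside, and purely as a sketch (bucket name, keys, and image name are the placeholders from the question): when the download succeeds inside the container, the file lands in that container's filesystem and is discarded when the --rm container exits, so it never shows up on the host. Bind-mounting a host directory over the target path would make it visible locally:
# Download into a bind-mounted directory so the file remains on the host
docker run -it --rm \
  -e AWS_DEFAULT_REGION='us-east-2' -e AWS_ACCESS_KEY_ID='aaaaaa' -e AWS_SECRET_ACCESS_KEY='dsfssdfds' \
  -v "$(pwd)/downloads":/tmp \
  dockerimage aws s3 cp s3://bucketname/sample.jmx /tmp
# sample.jmx should now appear in ./downloads on the host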

env var in a docker container cannot be echoed

I've built a docker image containing a number of environment variables, including one called SPARK_HOME. Here is the line from the Dockerfile that declares that env var:
ENV SPARK_HOME="/opt/spark"
When I issue docker run I can see that the env var exists, but any reference to it returns nothing, as this simple echo demonstrates:
$ docker run --rm myimage /bin/bash -c "env | grep SPARK_HOME ; echo SPARK_HOME=$SPARK_HOME"
SPARK_HOME=/opt/spark
SPARK_HOME=
$
Am I missing something obvious here? Why can I not refer to the value of an existing env var?
EDIT 1: As requested in the comments, the Dockerfile content is included below the break.
EDIT 2: I discovered that the variable can be referenced if I run the container interactively:
$ docker run --rm -it myimage /bin/bash
root@419dd5f13a6f:/tmp# echo $SPARK_HOME
/opt/spark
FROM our.internal.artifact.store/python:3.7-stretch
WORKDIR /tmp
ENV SPARK_VERSION=2.2.1
ENV HADOOP_VERSION=2.8.4
ARG ARTIFACTORY_USER
ARG ARTIFACTORY_ENCRYPTED_PASSWORD
ARG ARTIFACTORY_PATH=our.internal.artifact.store/artifactory/generic-dev/ceng/external-dependencies
ARG SPARK_BINARY_PATH=https://${ARTIFACTORY_PATH}/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz
ARG HADOOP_BINARY_PATH=https://${ARTIFACTORY_PATH}/hadoop-${HADOOP_VERSION}.tar.gz
ADD files/apt-transport-https_1.4.8_amd64.deb /tmp
RUN echo "deb https://username:password#our.internal.artifact.store/artifactory/debian-main-remote stretch main" >/etc/apt/sources.list.d/main.list &&\
echo "deb https://username:password#our.internal.artifact.store/artifactory/maria-db-debian stretch main" >>/etc/apt/sources.list.d/main.list &&\
echo 'Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/02update &&\
echo 'Acquire::http::Timeout "10";' > /etc/apt/apt.conf.d/99timeout &&\
echo 'Acquire::ftp::Timeout "10";' >> /etc/apt/apt.conf.d/99timeout &&\
dpkg -i /tmp/apt-transport-https_1.4.8_amd64.deb &&\
apt-get install --allow-unauthenticated -y /tmp/apt-transport-https_1.4.8_amd64.deb &&\
apt-get update --allow-unauthenticated -y -o Dir::Etc::sourcelist="sources.list.d/main.list" -o Dir::Etc::sourceparts="-" -o APT::Get::List-Cleanup="0"
RUN apt-get update && \
apt-get -y install default-jdk
# Detect JAVA_HOME and export in bashrc.
# This will result in something like this being added to /etc/bash.bashrc
# export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
RUN echo export JAVA_HOME="$(readlink -f /usr/bin/java | sed "s:/jre/bin/java::")" >> /etc/bash.bashrc
# Configure Spark-${SPARK_VERSION}
RUN curl --fail -u "${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPTED_PASSWORD}" -X GET "${SPARK_BINARY_PATH}" -o /opt/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
&& cd /opt \
&& tar -xvzf /opt/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
&& rm spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
&& ln -s spark-${SPARK_VERSION}-bin-hadoop2.7 spark \
&& sed -i '/log4j.rootCategory=INFO, console/c\log4j.rootCategory=CRITICAL, console' /opt/spark/conf/log4j.properties.template \
&& mv /opt/spark/conf/log4j.properties.template /opt/spark/conf/log4j.properties \
&& mkdir /opt/spark-optional-jars/ \
&& mv /opt/spark/conf/spark-defaults.conf.template /opt/spark/conf/spark-defaults.conf \
&& printf "spark.driver.extraClassPath /opt/spark-optional-jars/*\nspark.executor.extraClassPath /opt/spark-optional-jars/*\n">>/opt/spark/conf/spark-defaults.conf \
&& printf "spark.driver.extraJavaOptions -Dderby.system.home=/tmp/derby" >> /opt/spark/conf/spark-defaults.conf
# Configure Hadoop-${HADOOP_VERSION}
RUN curl --fail -u "${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPTED_PASSWORD}" -X GET "${HADOOP_BINARY_PATH}" -o /opt/hadoop-${HADOOP_VERSION}.tar.gz \
&& tar -xvzf /opt/hadoop-${HADOOP_VERSION}.tar.gz \
&& rm /opt/hadoop-${HADOOP_VERSION}.tar.gz \
&& ln -s hadoop-${HADOOP_VERSION} hadoop
# Set Environment Variables.
ENV SPARK_HOME="/opt/spark" \
HADOOP_HOME="/opt/hadoop" \
PYSPARK_SUBMIT_ARGS="--master=local[*] pyspark-shell --executor-memory 1g --driver-memory 1g --conf spark.ui.enabled=false spark.executor.extrajavaoptions=-Xmx=1024m" \
PYTHONPATH="/opt/spark/python:/opt/spark/python/lib/py4j-0.10.7-src.zip:$PYTHONPATH" \
PATH="$PATH:/opt/spark/bin:/opt/hadoop/bin" \
PYSPARK_DRIVER_PYTHON="/usr/local/bin/python" \
PYSPARK_PYTHON="/usr/local/bin/python"
# Upgrade pip and setuptools
RUN pip install --index-url https://username:password@our.internal.artifact.store/artifactory/api/pypi/pypi-virtual-all/simple --upgrade pip setuptools
# Install core python packages
RUN pip install --index-url https://username:password@our.internal.artifact.store/artifactory/api/pypi/pypi-virtual-all/simple pipenv
ADD Pipfile /tmp
ADD pysparkdf_helloworld.py /tmp
OK, contrary to my comment, that's not weird at all.
The issue is just that your local shell interpolates $SPARK_HOME before the command is sent to the container, so you're effectively calling echo SPARK_HOME=.
To fix it, escape the env var in the command: $SPARK_HOME -> \$SPARK_HOME
Demo:
$ export SPARK_HOME=foo
$ docker run ... /bin/bash -c "env | grep SPARK_HOME ; echo SPARK_HOME=$SPARK_HOME"
> SPARK_HOME=/opt/spark
> SPARK_HOME=foo
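With the escape applied (same image name as in the question; the output shown is what one would expect), both values now come from inside the container:
$ docker run --rm myimage /bin/bash -c "env | grep SPARK_HOME ; echo SPARK_HOME=\$SPARK_HOME"
> SPARK_HOME=/opt/spark
> SPARK_HOME=/opt/spark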

CSV data set config- Jmeter Docker

I am creating a JMeter Docker container. Test inputs are driven from a CSV (CSV Data Set Config). What file path should I set in the script?
Given that you're creating a JMeter Docker container, you need to know where to drop the CSV file. It is normally recommended to use relative paths to CSV files in scripts, for better maintainability and for distributed testing.
So I would suggest using the Docker COPY instruction to transfer your CSV file into JMeter's "bin" folder and referring to it by filename alone in the CSV Data Set Config.
Given the example Dockerfile from the Make Use of Docker with JMeter - Learn How article:
# 1
FROM alpine:3.6
# 2
LABEL maintainer="vincenzo.marrazzo@domain.personal"
# 3
ARG JMETER_VERSION="5.0"
# 4
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN ${JMETER_HOME}/bin
ENV MIRROR_HOST http://mirrors.ocf.berkeley.edu/apache/jmeter
ENV JMETER_DOWNLOAD_URL ${MIRROR_HOST}/binaries/apache-jmeter-${JMETER_VERSION}.tgz
ENV JMETER_PLUGINS_DOWNLOAD_URL http://repo1.maven.org/maven2/kg/apc
ENV JMETER_PLUGINS_FOLDER ${JMETER_HOME}/lib/ext/
# 5
RUN apk update \
&& apk upgrade \
&& apk add ca-certificates \
&& update-ca-certificates \
&& apk add --update openjdk8-jre tzdata curl unzip bash \
&& cp /usr/share/zoneinfo/Europe/Rome /etc/localtime \
&& echo "Europe/Rome" > /etc/timezone \
&& rm -rf /var/cache/apk/* \
&& mkdir -p /tmp/dependencies \
&& curl -L --silent ${JMETER_DOWNLOAD_URL} > /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz \
&& mkdir -p /opt \
&& tar -xzf /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz -C /opt \
&& rm -rf /tmp/dependencies
# 6
RUN curl -L --silent ${JMETER_PLUGINS_DOWNLOAD_URL}/jmeter-plugins-dummy/0.2/jmeter-plugins-dummy-0.2.jar -o ${JMETER_PLUGINS_FOLDER}/jmeter-plugins-dummy-0.2.jar
RUN curl -L --silent ${JMETER_PLUGINS_DOWNLOAD_URL}/jmeter-plugins-cmn-jmeter/0.5/jmeter-plugins-cmn-jmeter-0.5.jar -o ${JMETER_PLUGINS_FOLDER}/jmeter-plugins-cmn-jmeter-0.5.jar
# 7
ENV PATH $PATH:$JMETER_BIN
# 8
COPY launch.sh /
COPY somefile.csv $JMETER_BIN
#9
WORKDIR ${JMETER_HOME}
#10
ENTRYPOINT ["/launch.sh"]
So this line:
COPY somefile.csv $JMETER_BIN
will copy your CSV file into the "bin" folder of your JMeter installation, so you will be able to refer to it simply as somefile.csv.
You should set the file path to the path as seen from inside the Docker container, i.e. the path that corresponds to the mounted volume:
https://docs.docker.com/storage/volumes/#choose-the--v-or---mount-flag
For example:
docker run -v "DIR of machine":"DIR inside docker container"

How can I use a volume in my Dockerfile to copy the JMeter result to my local machine

How can I use a volume in my Dockerfile to copy the JMeter result to my local machine?
I need to display the result locally; how can I copy the result out of the container with the help of a VOLUME?
For example: I am saving the JMeter HTML report in my container, but after that the container stops automatically. Someone suggested I use the Docker VOLUME command so the HTML report can be accessed.
FROM alpine
ARG JMETER_VERSION="4.0"
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN ${JMETER_HOME}/bin
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
ENV JMETER_PLUGINS_DOWNLOAD_URL http://repo1.maven.org/maven2/kg/apc/jmeter-plugins-functions/2.0/jmeter-plugins-functions-2.0.jar
ENV JMETER_PLUGINS_FOLDER ${JMETER_HOME}/lib/ext/
# Change TimeZone TODO: TZ still is not set!
ARG TZ="Australia/Melbourne"
RUN apk update \
&& apk upgrade \
&& apk add ca-certificates \
&& update-ca-certificates \
&& apk add --update openjdk8-jre tzdata curl unzip bash \
&& rm -rf /var/cache/apk/* \
&& mkdir -p /tmp/dependencies \
&& curl -L --silent ${JMETER_DOWNLOAD_URL} > /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz \
&& mkdir -p /opt \
&& tar -xzf /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz -C /opt \
&& rm -rf /tmp/dependencies
RUN curl -L --silent ${JMETER_PLUGINS_DOWNLOAD_URL}/jmeter-plugins-dummy/0.2/jmeter-plugins-dummy-0.2.jar -o ${JMETER_PLUGINS_FOLDER}/jmeter-plugins-dummy-0.2.jar
RUN curl -L --silent ${JMETER_PLUGINS_DOWNLOAD_URL}/jmeter-plugins-cmn-jmeter/0.5/jmeter-plugins-cmn-jmeter-0.5.jar -o ${JMETER_PLUGINS_FOLDER}/jmeter-plugins-cmn-jmeter-0.5.jar
# TODO: plugins (later)
# && unzip -oq "/tmp/dependencies/JMeterPlugins-*.zip" -d $JMETER_HOME
# Set global PATH such that "jmeter" command is found
ENV PATH $PATH:$JMETER_BIN
ENV URL_PATH=${URL}
WORKDIR ${JMETER_HOME}
#RUN export DATETIME=$(date +%Y%m%d)
RUN mkdir -p /var/www/html/"$(date +%Y%m%d)"
VOLUME /var/www/html/
#Copy the *.jmx file jmeter bin file
COPY Get_Ping_Node_API.jmx ./bin
CMD ./bin/jmeter -n -t ./bin/Get_Ping_Node_API.jmx -l ./bin/result.jtl -e -o ./bin/result_html
You can use a Docker volume when you run the image.
Add the --volume / -v flag to your docker run command:
docker run -v "HOST DIR":"CONTAINER DIR"
