JMeter in Docker

I am trying to run JMeter in Docker. I have the Dockerfile below, and an entrypoint.sh script is added as its ENTRYPOINT.
ARG JMETER_VERSION="5.2.1"
RUN mkdir /jmeter
WORKDIR /jmeter
RUN apt-get update \
&& apt-get install wget -y \
&& apt-get install openjdk-8-jdk -y \
&& wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.2.1.tgz \
&& tar -xzf apache-jmeter-5.2.1.tgz \
&& rm apache-jmeter-5.2.1.tgz
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
RUN echo $JAVA_HOME
ENV JMETER_BIN /jmeter/apache-jmeter-5.2.1/bin
ENV PATH $PATH:$JMETER_BIN
RUN echo $JMETER_BIN
WORKDIR /jmeter/apache-jmeter-5.2.1
COPY users.jmx /jmeter/apache-jmeter-5.2.1
COPY entrypoint.sh /jmeter/apache-jmeter-5.2.1
RUN ["chmod", "+x", "entrypoint.sh"]
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh
#!/bin/bash
# Inspired from https://github.com/hhcordero/docker-jmeter-client
# Basically runs jmeter, assuming the PATH is set to point to JMeter bin-dir (see Dockerfile)
#
# This script expects the standard JMeter command parameters.
#
set -e
freeMem=`awk '/MemFree/ { print int($2/1024) }' /proc/meminfo`
s=$(($freeMem/10*8))
x=$(($freeMem/10*8))
n=$(($freeMem/10*2))
export JVM_ARGS="-Xmn${n}m -Xms${s}m -Xmx${x}m"
echo "START Running Jmeter on `date`"
echo "JVM_ARGS=${JVM_ARGS}"
echo "jmeter args=$#"
# Keep entrypoint simple: we must pass the standard JMeter arguments
bin/jmeter.sh $#
echo "END Running Jmeter on `date`"
Now when I try to run the container without JMeter arguments, the container starts and JMeter tries to launch its GUI:
docker run sar/test12
I get this error:
An error occurred:
No X11 DISPLAY variable was set, but this program performed an operation which requires it.
But when I run the JMeter container with arguments:
docker run -v /home/jmeter/unbuntjmeter/:/jmeter/apache-jmeter-5.2.1 sar/test12 -n -t ./users.jmx
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: "./entrypoint.sh": permission denied": unknown.

Solutions
For the X11 issue, you can try setting -e DISPLAY=$DISPLAY in your docker run command; depending on how your host is set up, you may need some additional steps to get it working properly. But trying to get the GUI working here seems like overkill: without the -n (non-GUI) flag, JMeter starts in GUI mode, which is what triggers the X11 error. To fix the problem when you do pass the command arguments through, you can either:
Add execute permissions to the entrypoint.sh file on your host by running chmod +x /home/jmeter/unbuntjmeter/entrypoint.sh.
Or
Don't mount /home/jmeter/unbuntjmeter/ into the container by removing the -v argument from your docker run command.
Problem
When you run this command docker run -v /home/jmeter/unbuntjmeter/:/jmeter/apache-jmeter-5.2.1 sar/test12 -n -t ./users.jmx, you are mounting the directory /home/jmeter/unbuntjmeter/ from your host machine onto /jmeter/apache-jmeter-5.2.1 in your docker container.
That means the /jmeter/apache-jmeter-5.2.1/entrypoint.sh script baked into the image is being shadowed by the one in that directory on your host (if there is one, which there does seem to be). That file on your host machine doesn't have the proper permissions to be executed in your container (presumably it just needs +x, since your build already runs RUN ["chmod", "+x", "entrypoint.sh"] on the image's copy).
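The complete fix, using the paths from the question: make the host's copy executable, then run the same command again.
chmod +x /home/jmeter/unbuntjmeter/entrypoint.sh
docker run -v /home/jmeter/unbuntjmeter/:/jmeter/apache-jmeter-5.2.1 sar/test12 -n -t ./users.jmx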

Related

cannot access and run a script in docker container

I'm fuzzing PHP inside a Docker container. Before the fuzzing process starts, it downloads and builds PHP from source, then runs the fuzzing job. I'm using -d to run the container in the background.
sudo docker run --name=fuzzphp -d -v ~/crashfile:/fuzzer/script/fuzzing/ --privileged --cap-add=SYS_PTRACE fuzzphp /bin/bash -c "./tool/autophp.sh"
Here is the autophp.sh script that builds and runs PHP:
./script/build/buildphp.sh
./script/fuzzing/runphp.sh
Here is my Dockerfile
FROM ubuntu:16.04
#install required package
RUN mkdir -p /fuzzer
WORKDIR /fuzzer
COPY . /fuzzer
ENV PATH "/fuzzer/clang+llvm/bin:$PATH"
ENV LD_LIBRARY_PATH "/fuzzer/clang+llvm/lib:$LD_LIBRARY_PATH"
RUN ["chmod" "777" "-R" "script/"]
RUN tool/install_fuzzer.sh
The problem is that after I run autophp.sh, sudo docker logs fuzzphp shows that only the build script succeeds, and runphp.sh returns an error:
chmod: cannot access '/script/fuzzing/buildphp.sh': No such file or directory
./tool/autophp.sh: line 5: ./script/fuzzing/runphp.sh: No such file or directory
I can confirm the script/fuzzing/runphp.sh file exists on the host, but I have no idea why only buildphp.sh is executed and not runphp.sh. What's wrong here?

Docker: Cannot execute binary file

I can't run any binary in my docker container.
Dockerfile:
FROM ubuntu:eoan AS compiler-build
RUN apt-get update && \
dpkg --add-architecture i386 && \
apt-get install -y gcc \
gcc-multilib \
make \
cmake \
git \
python3.8 \
bash
WORKDIR /home
ADD . /home/pawn
RUN mkdir build
WORKDIR /home/build
ENTRYPOINT ["/bin/bash"]
CMD ["/bin/bash"]
I can't even use the file utility:
[root@LAPTOP-EJ5BH6DJ compiler]:~/dev/private/SAMP/compiler (v13.11.0) (master) dc run compiler file bash
/usr/bin/file: /usr/bin/file: cannot execute binary file
From this forum thread:
This error occurs when you use a shell in your entrypoint without the "-c" argument
So, if you change your Dockerfile to end with
ENTRYPOINT [ "/bin/bash", "-l", "-c" ]
then you can run binary files.
Note the purpose of the options for /bin/bash, from the manpage:
-l: Make bash act as if it had been invoked as a login shell
-c: If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, the first argument is assigned to $0 and any remaining arguments are assigned to the positional parameters. The assignment to $0 sets the name of the shell, which is used in warning and error messages.
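You can see this argument handling with plain bash, outside Docker (the strings zeroth and first here are arbitrary placeholders):
$ bash -c 'echo "arg0=$0 arg1=$1"' zeroth first
arg0=zeroth arg1=first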
Additionally, this article is a worthwhile read on how to use both ENTRYPOINT and CMD together, and what their differences are.
EDIT: Here's another article that goes into a trivial (but clearer than the first article) example using the echo shell builtin.
EDIT: Here's an adaptation of the trivial example from the second article I linked:
FROM ubuntu
ENTRYPOINT [ "/bin/bash", "-l", "-c" ]
CMD [ "ls" ]
$ docker build -t test .
$ docker run --rm test
bin
boot
...
var
$ docker run --rm test "ls etc"
adduser.conf
alternatives
apt
...
update-motd.d
xattr.conf
Note the " around ls /etc. Without the quotes, the argument /etc doesn't seem to be passed to the ls command as I might expect.
It seems the ENTRYPOINT can't simply point to /bin/bash here: removing ENTRYPOINT ["/bin/bash"] is enough to make it work.
I hit the same error. Unlike the other answers, my error was related to my docker run parameters:
# failed
docker run -it $(pwd | xargs basename):latest bash
# worked
docker run -it $(pwd | xargs basename):latest
I didn't need to add bash as I already had this in my Dockerfile:
ENTRYPOINT ["/bin/bash"]
There are times when we don't have control over an image's Dockerfile, but its original entrypoint has an issue. We can override the entrypoint to debug such issues:
# example
docker run --rm \
--entrypoint /bin/bash \
-it apache/spark-py:v3.3.0

How to pass arguments to the docker run command dynamically

I have a Dockerfile like this, and I have to pass arguments to the docker run command dynamically:
FROM ubuntu:14.04
ENV IRONHIDE_SOURCE /var/tmp/ironhide-setup
RUN apt-get update && apt-get install -y openssh-server supervisor cron syslog-ng-core logrotate libapr1 libaprutil1 liblog4cxx10 libxml2 psmisc xsltproc ntp
RUN sed -i -E 's/^(\s*)system\(\);/\1unix-stream("\/dev\/log");/' /etc/syslog-ng/syslog-ng.conf
ADD ironhide-setup/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN mkdir -p /var/log/supervisor && mkdir -p /opt/ibm/
COPY /ironhide-setup/etc/cron.d/* /etc/cron.d
ADD ironhide-setup $IRONHIDE_SOURCE
ENV JAVA_HOME /usr/java/default
ENV PATH $JAVA_HOME/bin:$PATH
ENV IRONHIDE_ROOT /usr/ironhide
ENV LD_LIBRARY_PATH /usr/ironhide/lib
ENV IH_ROOT /usr/ironhide
ENV IRONHIDE_BACKUP_PATH /var/tmp/ironhide-backup
ENV PATH $IH_ROOT/bin:$PATH
RUN echo 'PS1="[AppConnect-Container#\h \w]: "' >> ~/.bashrc
CMD ["/usr/bin/supervisord"]
and my supervisord.conf is this
[supervisord]
nodaemon=true
[program:cron]
command = cron -f -L 15
priority=1
[program:syslog-ng]
command=/usr/sbin/syslog-ng -F -p /var/run/syslog-ng.pid --no-caps
[program:InstallCastIron]
command = %(ENV_IRONHIDE_SOURCE)s/scripts/var_setup
priority=2
I have to pass arguments to the "docker run" command so that, internally, one of the scripts under the scripts location can use those arguments when the docker container comes up.
Please let me know how I can do this and how to achieve it.
To achieve this feat, you will need to use environment variables.
First you will need to make sure that the service you want to pass arguments to consumes those environment variables.
Second, you will need to have those variables defined in your Dockerfile. For example:
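A minimal sketch, using the DEFINE_THOSE_VARS name from the docker run command below (the default value is a placeholder):
ENV DEFINE_THOSE_VARS="default-value"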
Third, make sure you use an entrypoint script. For example:
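A minimal sketch of such an entrypoint script: it reads the variable, then hands control to the container's main process.
#!/bin/bash
# pick up the variable supplied via docker run -e (or the Dockerfile default)
echo "Starting with DEFINE_THOSE_VARS=${DEFINE_THOSE_VARS}"
# hand off to the command the container was started with
exec "$@"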
Last, you can use docker run -e DEFINE_THOSE_VARS=<value>. Alternatively, you can use docker-compose, like this:
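A minimal docker-compose.yml sketch (the service and image names are placeholders):
version: "3"
services:
  app:
    image: your-image:latest
    environment:
      - DEFINE_THOSE_VARS=value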
You can browse my repo here, which achieves this.
Please feel free to ask any Question.
Cheers!

Docker build and run with Miniconda environments on Ubuntu host

I am in the process of creating a Docker container which has a Miniconda environment set up with some packages (pip and conda). Dockerfile:
# Use an official Miniconda runtime as a parent image
FROM continuumio/miniconda3
# Create the conda environment.
# RUN conda create -n dev_env Python=3.6
RUN conda update conda -y \
&& conda create -y -n dev_env Python=3.6 pip
ENV PATH /opt/conda/envs/dev_env/bin:$PATH
RUN /bin/bash -c "source activate dev_env" \
&& pip install azure-cli \
&& conda install -y nb_conda
The behavior I want is that when the container is launched, it should automatically switch to the "dev_env" conda environment, but I haven't been able to get this to work. Logs:
dparkar@mymachine:~/src/dev/setupsdk$ docker build .
Sending build context to Docker daemon 2.56kB
Step 1/4 : FROM continuumio/miniconda3
---> 1284db959d5d
Step 2/4 : RUN conda update conda -y && conda create -y -n dev_env Python=3.6 pip
---> Using cache
---> cb2313f4d8a8
Step 3/4 : ENV PATH /opt/conda/envs/dev_env/bin:$PATH
---> Using cache
---> 320d4fd2b964
Step 4/4 : RUN /bin/bash -c "source activate dev_env" && pip install azure-cli && conda install -y nb_conda
---> Using cache
---> 3c0299dfbe57
Successfully built 3c0299dfbe57
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57
(base) root@3db861098892:/# source activate dev_env
(dev_env) root@3db861098892:/# exit
exit
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 source activate dev_env
[FATAL tini (7)] exec source failed: No such file or directory
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 /bin/bash source activate dev_env
/bin/bash: source: No such file or directory
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 /bin/bash "source activate dev_env"
/bin/bash: source activate dev_env: No such file or directory
dparkar@mymachine:~/src/dev/setupsdk$ docker run -it 3c0299dfbe57 /bin/bash -c "source activate dev_env"
dparkar@mymachine:~/src/dev/setupsdk$
As you can see above, when I am within the container, I can successfully run "source activate dev_env" and the environment switches over. But I want this to happen automatically when the container is launched.
I also run "source activate dev_env" in the Dockerfile during build time. Again, I am not sure if that has any effect either.
You should use the CMD instruction for anything related to runtime.
Anything typed after RUN will only be run at image creation time, not when you actually run the container.
The shell used to run such commands is closed at the end of the image creation process, making the environment activation non-persistent in that case.
As such, your additional line might look like this:
CMD ["conda activate <your-env-name> && <other commands>"]
where <other commands> are other commands you might need at runtime after the environment activation.
This Dockerfile worked for me:
# start with miniconda image
FROM continuumio/miniconda3
# setting the working directory
WORKDIR /usr/src/app
# Copy the file from your host to your current location in container
COPY . /usr/src/app
# Run the command inside your image filesystem to create an environment and name it in the requirements.yml file, in this case "myenv"
RUN conda env create --file requirements.yml
# Activate the environment named "myenv" with shell command
SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
# Make sure the environment is activated by testing if you can import flask or any other package you have in your requirements.yml file
RUN echo "Make sure flask is installed:"
RUN python -c "import flask"
# exposing port 8050 for interaction with local host
EXPOSE 8050
#Run your application in the new "myenv" environment
CMD ["conda", "run", "-n", "myenv", "python", "app.py"]

Modifying a Docker container's ENTRYPOINT to run a script before the CMD

I'd like to run a script to attach a network drive every time I create a container in Docker. From what I've read this should be possible by setting a custom entrypoint. Here's what I have so far:
FROM ubuntu
COPY *.py /opt/package/my_code
RUN mkdir /efs && \
apt-get install nfs-common -y && \
echo "#!/bin/sh" > /root/startup.sh && \
echo "mount -t nfs4 -o net.nfs.com:/ /nfs" >> /root/startup.sh && \
echo "/bin/sh -c '$1'" >> /root/startup.sh && \
chmod +x /root/startup.sh
WORKDIR /opt/package
ENV PYTHONPATH /opt/package
ENTRYPOINT ["/root/startup.sh"]
At the moment my CMD is not getting passed through properly to my /bin/sh line, but I'm wondering if there isn't an easier way to accomplish this?
Unfortunately I don't have control over how my containers will be created. This means I can't simply prepend the network mounting command to the original docker command.
From the documentation:
CMD should be used as a way of defining default arguments for an ENTRYPOINT command or for executing an ad-hoc command in a container
So if you have an ENTRYPOINT specified, the CMD will be passed as additional arguments for it. It means that your entrypoint script should explicitly handle these arguments.
In your case, when you run:
docker run yourimage yourcommand
What is executed in your container is :
/root/startup.sh yourcommand
The solution is to add exec "$@" at the end of your /root/startup.sh script. This way, it will execute any command given as its arguments.
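Applied to the question's Dockerfile, the generated /root/startup.sh would then look something like this (a sketch; the mount line comes from the question):
#!/bin/sh
# attach the network drive first
mount -t nfs4 net.nfs.com:/ /efs
# then run the command (CMD or docker run arguments) passed to the entrypoint
exec "$@"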
You might want to read about the ENTRYPOINT mechanisms and its interaction with CMD.
