Not able to start JMeter using Docker in distributed mode

I am trying to build a distributed load testing POC using JMeter. I have followed the tutorial mentioned in this Medium article - link
The repo for the code is here - https://github.com/vepo/jmeter-docker
Since the JMeter version used in the tutorial is 3.3, I changed the Dockerfile inside jmeter-base to pull the more recent version 5.1.1.
New Dockerfile inside jmeter-base:
FROM java:8
RUN mkdir /jmeter \
&& cd /jmeter/ \
&& wget http://mirrors.estointernet.in/apache//jmeter/source/apache-jmeter-5.1.1_src.tgz \
&& tar -xvzf apache-jmeter-5.1.1_src.tgz \
&& rm apache-jmeter-5.1.1_src.tgz
ENV JMETER_HOME /jmeter/apache-jmeter-5.1.1/
# Add Jmeter to the Path
ENV PATH $JMETER_HOME/bin:$PATH
I have not made any other changes in the Dockerfile.
As per the README, when I run the command ./exec-jmeter.sh 4 (4 being the number of slaves), I keep getting this error:
/bin/bash: ../bin/jmeter: No such file or directory
I tried with the full path, like
../jmeter/apache-jmeter-5.1.1/bin/jmeter, and also ../jmeter/bin/jmeter, but I still keep getting the same error.
What am I doing wrong here?

You are downloading the JMeter source archive, not the JMeter binary tar.gz.
I have now updated the repo to JMeter 5.1.1, but the Test Plan is no longer compatible.
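For reference, here is a minimal sketch of a corrected jmeter-base Dockerfile that pulls the binary distribution instead; the Apache archive is used as one possible download location:
FROM java:8
RUN mkdir /jmeter \
&& cd /jmeter/ \
# binaries/, not source/: the _src archive contains no runnable bin/jmeter
&& wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.1.1.tgz \
&& tar -xzf apache-jmeter-5.1.1.tgz \
&& rm apache-jmeter-5.1.1.tgz
ENV JMETER_HOME /jmeter/apache-jmeter-5.1.1/
# Add JMeter to the PATH
ENV PATH $JMETER_HOME/bin:$PATH
With the binary archive, bin/jmeter actually exists under JMETER_HOME, which is what the exec script is trying to run.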

Related

Source To Image not finding C++ libs

I am creating a custom builder image using S2I dotnet core. This will run in an OpenShift Linux container.
I have modified the custom builder image and included a few lines to copy a few DLLs and ".so" files.
When running the container in OpenShift, I am facing the below error:
"unable to load shared library 'CustomCppWrapper' or one of its dependencies. In order to help diagnose loading problems,
consider setting the LD_DEBUG environment variable: libWrapperName: cannot open shared object file: No such file or directory"
I set the LD_DEBUG environment variable and found the following errors:
/lib64/libstdc++.so.6: error: version lookup error: version `CXXABI_1.3.8' not found (required by /opt/app-root/app/libCWrappeNamer.so) (fatal)
/lib64/libstdc++.so.6: error: version lookup error: version `CXXABI_1.3.8' not found (required by ./libCWrappeNamer.so) (fatal)
I ran the command below and got the following output:
ldd libCWrappeNamer.so
./libCWrappeNamer.so: /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by ./libCWrappeNamer.so)
./libCWrappeNamer.so: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /ab/sdk/customlib/gcc540/lib/libabc.so)
./libCWrappeNamer.so: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /ab/sdk/customlib/gcc540/lib/libxmlc.so)
Below is my custom builder image Dockerfile:
FROM dotnet/dotnet-31-runtime-rhel7
# This image provides a .NET Core 3.1 environment you can use to run your .NET
# applications.
ENV PATH=/opt/app-root/src/.local/bin:/opt/app-root/src/bin:/opt/app-root/node_modules/.bin:${PATH} \
STI_SCRIPTS_PATH=/usr/libexec/s2i
LABEL io.k8s.description="Platform for building and running .NET Core 3.1 applications" \
io.openshift.tags="builder,.net,dotnet,dotnetcore,rh-dotnet31"
# Labels consumed by Red Hat build service
LABEL name="dotnet/dotnet-31-rhel7" \
com.redhat.component="rh-dotnet31-container" \
version="3.1" \
release="1" \
architecture="x86_64"
#-------------------------- COPY CPP LIBS
COPY CustomCppWrapper.lib /opt/app-root/app
COPY libCWrappeNamer.so /opt/app-root/app
#----------------------------------
# Labels consumed by Eclipse JBoss OpenShift plugin
LABEL com.redhat.dev-mode="DEV_MODE:false" \
com.redhat.deployments-dir="/opt/app-root/src"
# Switch to root for package installs
USER 0
# Copy the S2I scripts from the specific language image to $STI_SCRIPTS_PATH.
COPY ./s2i/bin/ /usr/libexec/s2i
RUN INSTALL_PKGS="rh-nodejs10-npm rh-nodejs10-nodejs-nodemon rh-dotnet31-dotnet-sdk-3.1 rsync" && \
yum install -y --setopt=tsflags=nodocs --disablerepo=\* \
--enablerepo=rhel-7-server-rpms,rhel-server-rhscl-7-rpms,rhel-7-server-dotnet-rpms \
$INSTALL_PKGS && \
rpm -V $INSTALL_PKGS && \
yum clean all -y && \
# yum cache files may still exist (and quite large in size)
rm -rf /var/cache/yum/*
# Directory with the sources is set as the working directory.
RUN mkdir /opt/app-root/src
WORKDIR /opt/app-root/src
# Trigger first time actions.
RUN scl enable rh-dotnet31 'dotnet help'
# Build the container tool.
RUN /usr/libexec/s2i/container-tool build-tool
# Since $HOME is set to /opt/app-root, the yum install may have created config
# directories (such as ~/.pki/nssdb) there. These will be owned by root and can
# cause actions that work on all of /opt/app-root to fail. So we need to fix
# the permissions on those too.
RUN chown -R 1001:0 /opt/app-root && fix-permissions /opt/app-root
ENV ENABLED_COLLECTIONS="$ENABLED_COLLECTIONS rh-nodejs10" \
# Needed for the `dotnet watch` to detect changes in a container.
DOTNET_USE_POLLING_FILE_WATCHER=true
# Run container by default as user with id 1001 (default)
USER 1001
# Set the default CMD to print the usage of the language image.
CMD /usr/libexec/s2i/usage
Your code depends on libstdc++.so.6, but it would seem that version isn't installed.
In your Dockerfile, add a yum install command for the library; that should do it. The exact package depends on what operating system you're using, but for RHEL 7, for example, you could do:
RUN yum install -y libstdc++
With more details of the operating system, I can give a more specific command.
In this specific example, the Dockerfile could look something like this:
FROM centos:7
RUN yum install -y libstdc++
CMD ["/bin/bash"]

Forked docker image not building

I am trying to fork this Docker image so that if anything changes in the original it won't affect me.
I have forked the repo corresponding to that image to my own repo.
I have cloned the repo and am trying to build it:
docker build . -t davcal/gcc-cross-x86_64-elf
I am getting this error:
+ cd /usr/local/src
+ ./build-binutils.sh 2.31.1
/bin/sh: 1: ./build-binutils.sh: not found
The command '/bin/sh -c set -x && cd /usr/local/src && ./build-binutils.sh ${BINUTILS_VERSION} && ./build-gcc.sh ${GCC_VERSION}' returned a non-zero code: 127
What makes no sense to me is that if I use the original image, it builds successfully:
FROM randomdude/gcc-cross-x86_64-elf
...
Maybe Docker Hub stores a pre-built image?
How do I fix this?
Note: I am using Windows. This shouldn't make a difference since the error originates within the container.
Edit
I tried patching the Dockerfile to chmod the sh files executable, in case that was causing problems on Windows. Unfortunately, the exact same error occurs.
RUN set -x \
&& chmod +x /usr/local/src/build-binutils.sh \
&& chmod +x /usr/local/src/build-gcc.sh \
&& cd /usr/local/src \
&& ./build-binutils.sh ${BINUTILS_VERSION} \
&& ./build-gcc.sh ${GCC_VERSION}
Edit 2
Following this method, I inspected the container to see whether the sh files actually exist.
I ran docker run --rm -it c53693f11514 bash, using the hash of the intermediate container from the previous successful step of the Dockerfile.
The output below shows that the files do exist:
root@9b8a64ac2090:/# cd usr/local/src
root@9b8a64ac2090:/usr/local/src# ls
binutils-2.31.1 build-binutils.sh build-gcc.sh gcc-8.2.0
From the described symptoms (the file exists, is a shell script, and works on other machines), the "file not found" error is most likely from Windows linefeeds being added to the file. When the Linux kernel processes a shell script, it looks at the first line, the #!/bin/sh or similar, and then finds that interpreter to run the shell script. If that interpreter isn't found, you'll get a "file not found" error.
In this case, the file it's looking for won't be /bin/sh, but instead /bin/sh\r or /bin/sh^M, depending on how you want to represent the carriage return character. You can fix that for single files with a tool like dos2unix, but in general you'll want to fix git itself, since there are likely other files that have had their linefeeds corrupted. For details on adjusting the behavior of git, see this post.
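A minimal sketch of both fixes, assuming the two scripts sit at the root of the clone:
# one-off repair of the affected files
dos2unix build-binutils.sh build-gcc.sh
# or fix git itself: check files out with LF endings,
# then rewrite the working tree from the index
git config core.autocrlf input
git rm --cached -r .
git reset --hard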

How to install chromedriver in the docker file for jmeter

I have deployed jmeter in kubernetes by using
https://github.com/kubernauts/jmeter-kubernetes
But I am facing difficulties integrating Selenium WebDriver with JMeter. I am able to install the Selenium packages within the Docker image using
RUN cd /jmeter/apache-jmeter-$JMETER_VERSION/ && wget -q -O /tmp/jpgc-webdriver-3.3.zip https://jmeter-plugins.org/files/packages/jpgc-webdriver-3.3.zip && unzip -n /tmp/jpgc-webdriver-3.3.zip && rm /tmp/jpgc-webdriver-3.3.zip
But how do I install Chromedriver within Docker? There is no official JMeter documentation on this, and I am new to JMeter. I would really appreciate it if anyone could guide me on this.
There is no official documentation for JMeter on this because JMeter doesn't support Selenium.
You should look into the official documentation for Chromedriver and Docker.
Given you were able to come up with a RUN directive to download and unpack the WebDriver Sampler plugin, you should be able to do the same with Chromedriver, like:
RUN wget -q -O /tmp/chromedriver.zip https://chromedriver.storage.googleapis.com/87.0.4280.20/chromedriver_linux64.zip && unzip /tmp/chromedriver.zip && rm /tmp/chromedriver.zip && mv chromedriver /tmp
You will also need to change the ENTRYPOINT to set the webdriver.chrome.driver JMeter system property, like -Dwebdriver.chrome.driver=/tmp/chromedriver
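As an illustration only, if the image launched JMeter directly the property could be wired in like this; if the image starts JMeter through a wrapper script instead, append the flag to the jmeter invocation inside that script:
ENTRYPOINT ["jmeter", "-Dwebdriver.chrome.driver=/tmp/chromedriver"]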

Docker proxy config not working for ADD in Dockerfile

I try to write a Dockerfile that adds a file to the image like this:
ADD https://repository.internal/file.zip /tmp/
The repository.internal host is only reachable through a proxy. I provide the proxy configuration with the --config option, but the ADD command does not seem to use the proxy and fails.
I know the proxy configuration is correct because I added the line
RUN curl https://repository.internal/file.zip
which is working fine.
Is there any possibility to run the ADD command also with the proxy config?
As per my comments above, I believe this has something to do with how the Docker build process handles the ADD and RUN commands internally. I can't find documentation to back this up, so someone with greater internal knowledge may confirm or deny, but it makes sense: a RUN command is executed in a layer on top of the image being built, whereas the ADD command is performed by the builder itself and its results are baked into the image.
Whichever way this is being handled, you can achieve what you need by moving to the RUN method as follows:
FROM <your base image>
RUN curl https://repository.internal/file.zip >> /tmp/file.zip \
&& cd /tmp \
&& unzip file.zip \
&& rm file.zip
And you will have the files unzipped.
The rm at the end is needed, since unzip leaves the original zip file in place.
As you mentioned, this relies on the curl and unzip packages being available on the image... however, you could avoid having these within your final application image by using Docker multi-stage builds.
Your Dockerfile would then look something like:
FROM <some useful base image> as collector
RUN apt-get update && apt-get install -y curl unzip
RUN mkdir /tmp/files \
&& curl https://repository.internal/file.zip >> /tmp/files/file.zip \
&& cd /tmp/files \
&& unzip file.zip \
&& rm file.zip
FROM <your final desired base image>
COPY --from=collector /tmp/files /tmp
This then uses one image to provide curl and unzip for collecting and extracting your files, so you don't have to install them on your final application image.

Installing Quicklisp libraries in a Docker image

Is there a Dockerfile for installing cl-json (or another Quicklisp library) on Docker? Most installation instructions I've seen require user input on commands with no --noinput flag, making them difficult to run from a Dockerfile.
In addition, many of the instructions appear out of date or reference broken links and non-existent resources. It would be convenient to have a Dockerfile that installs such a library in a consistent way, e.g. with Quicklisp.
Here is a possible Dockerfile for an application based on SBCL.
FROM dparnell/minimal-sbcl
RUN sbcl --noinform \
--disable-ldb \
--lose-on-corruption \
--eval "(ql:quickload '(buildapp))" \
--eval '(buildapp:build-buildapp "/bin/buildapp")'
RUN buildapp --load /opt/quicklisp/setup.lisp \
--eval "(ql:quickload '(cl-json))" \
--output bin/executable
CMD executable
I am basing the image on dparnell/minimal-sbcl, which comes with Quicklisp pre-installed.
I then run SBCL once to build buildapp (that could be a separate Docker image).
Then I run buildapp, load quicklisp/setup.lisp and install cl-json. You can load as many dependencies as you want with quickload, but I'd recommend defining your own system.asd file and listing the dependencies there.
https://lispcookbook.github.io/cl-cookbook/testing.html#continuous-integration
In this tutorial, we use GitLab CI with the daewok/lisp-devel Docker image, which includes several Lisp implementations and Quicklisp, so we can run a Lisp and (ql:quickload "cl-json") right away.
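For completeness, a minimal Dockerfile sketch that bakes cl-json into an image non-interactively, reusing the dparnell/minimal-sbcl base from the answer above (its Quicklisp setup lives at /opt/quicklisp/setup.lisp):
FROM dparnell/minimal-sbcl
# ql:quickload asks no questions, so it runs fine during a non-interactive build
RUN sbcl --non-interactive \
--load /opt/quicklisp/setup.lisp \
--eval "(ql:quickload '(cl-json))"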
