Goal
I am using Docker to run JMeter in Azure DevOps. I am trying to use Blazemeter's Parallel Controller, which is not native to JMeter. So, following the justb4/jmeter image documentation, I used the following command to bring the image up and run the JMeter test:
docker run --name jmetertest -i -v /home/vsts/work/1/s/plugins:/plugins -v $ROOTPATH:/test -w /test justb4/jmeter ${@:2}
Error
However, it produces the following error while trying to handle the plugin (I know the plugin makes the difference, because the test runs fine without it):
cp: can't create '/test/lib/ext': No such file or directory
As far as I understand, this error is produced when one of the parent directories of the directory you are trying to create does not exist. Is there something I am doing wrong, or is there actually something wrong with the image?
References
For reference, I will include links to the image documentation and the repository.
Image: https://hub.docker.com/r/justb4/jmeter
Repository: https://github.com/justb4/docker-jmeter
Looking into the Dockerfile:
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
Looking into entrypoint.sh:
if [ -d /plugins ]
then
    for plugin in /plugins/*.jar; do
        cp $plugin $(pwd)/lib/ext
    done;
fi
It basically copies the plugins from the /plugins folder (if present) into the lib/ext folder relative to the current working directory.
I don't know why you added the -w /test stanza to your command line, but it explicitly "tells" the container that the local working directory is /test, not /opt/apache-jmeter-xxxx; that's why the script is failing to copy the files.
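A quick sanity check based on that explanation: dropping the -w /test stanza should let the entrypoint copy the plugins into JMeter's own lib/ext (assuming the image's default working directory is the JMeter home, as implied above), at the cost of referencing files by absolute path. A sketch with placeholder file names:

docker run --name jmetertest -i \
  -v /home/vsts/work/1/s/plugins:/plugins \
  -v $ROOTPATH:/test \
  justb4/jmeter -n -t /test/plan.jmx -l /test/result.jtl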
In general I don't think the approach is very sound, because:
In Azure DevOps you won't have your "local" folder (unless you want to keep the plugin binaries under the version control system)
Some JMeter plugins have other .jars as dependencies, so when you install a plugin you should:
put the plugin itself under the /lib/ext folder of your JMeter installation
put the plugin dependencies under the /lib folder of your JMeter installation
So I would recommend amending the Dockerfile to download the JMeter Plugins Manager and install the plugin(s) you need from the command line.
Something like:
RUN wget https://jmeter-plugins.org/get/ -O /opt/apache-jmeter-${JMETER_VERSION}/lib/ext/jmeter-plugins-manager.jar
RUN wget https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2/cmdrunner-2.2.jar -P /opt/apache-jmeter-${JMETER_VERSION}/lib/
RUN java -cp /opt/apache-jmeter-${JMETER_VERSION}/lib/ext/jmeter-plugins-manager.jar org.jmeterplugins.repository.PluginManagerCMDInstaller
RUN /opt/apache-jmeter-${JMETER_VERSION}/bin/PluginsManagerCMD.sh install bzm-parallel
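After rebuilding the image once, the /plugins mount is no longer needed and the -w /test working directory becomes harmless, because the plugin is already baked into the JMeter installation. A hedged example, assuming the amended Dockerfile is tagged jmeter-with-plugins:

docker build -t jmeter-with-plugins .
docker run --name jmetertest -i -v $ROOTPATH:/test -w /test jmeter-with-plugins <jmeter arguments>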
Related
I illustrated my problem with a diagram to complement my explanation (diagram not reproduced here).
To give some context: I'm working with continuous integration. I have a host machine with Jenkins running on it, and the scripted pipeline executes containers A and B to perform their build (A) and static analysis (B) functions, in that order. The reason to do the build beforehand is so that the compile_commands.json file is generated by make.
The project is an IoT (ESP32) project and I use the official Espressif image to build it. The project folder is /var/lib/jenkins/workspace/iot-proj/, it lives on the Jenkins machine (the host), and I share its contents with the containers through volumes. When the pipeline executes the command docker run -v $WORKSPACE/ESPComm:$WORKSPACE/ESPComm -w $WORKSPACE/ESPComm espressif/idf:v4.2.2 idf.py build, a build folder is created containing the file compile_commands.json, which will be used by cppcheck for static analysis. As this folder is shared through the volume, compile_commands.json persists and container B imports it smoothly.
The problem is that compiling the project requires Espressif libraries from the /opt/esp/idf/... folder; they are listed in compile_commands.json, and this folder is not accessible to container B when it runs cppcheck. How can I make this folder of Espressif libraries, which lives in container A, accessible in container B?
I tried this, but it didn't work, even with binfs:
https://github.com/moby/moby/issues/26872
sudo ls /proc/$(docker inspect --format {{.State.Pid}} YOUR_CONTAINER_NAME)/root
I had already read that a volume only goes from the host to the container; even so, I tried the command below, and the folder really is empty:
docker run -it -v $PWD:/project -v /opt/esp/idf/components/:/opt/esp/idf/components/ -w /project espressif/idf:v4.2.2
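(A workaround sketch, for reference: a bind mount always shadows whatever the image has at that path with the, here empty, host directory, which is why the folder appears empty. One option is to copy the components out of the image onto the host first, and then bind-mount that host copy into container B at the same path:)

# copy /opt/esp/idf/components out of the image via a temporary container
id=$(docker create espressif/idf:v4.2.2)
docker cp "$id":/opt/esp/idf/components ./idf-components
docker rm "$id"
# container B can then bind-mount the host copy at the path cppcheck expects:
#   docker run -v $PWD/idf-components:/opt/esp/idf/components ...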
I thought I could easily solve my problem by adding these directories to a .suppressions file so that cppcheck ignores those files. I tried passing the full path of the folder to be ignored, as I have done in other projects, but it didn't work; the following error message appears in Jenkins:
+ cppcheck --project=/var/lib/jenkins/workspace/iot-proj-TEST/ESPComm/build/compile_commands.json --inline-suppr --enable=all --suppress=missingInclude --suppress=unmatchedSuppression --suppressions-list=/var/lib/jenkins/workspace/iot-proj-TEST/ESPComm/build/.suppressions --inconclusive --std=c11 --xml --xml-version=2 --std=c11
cppcheck: error: '/opt/esp/idf/components/xtensa/debug_helpers.c' from compilation database does not exist
cppcheck: error: failed to load project '/var/lib/jenkins/workspace/iot-proj-TEST/ESPComm/build/compile_commands.json'. An error occurred.
I am trying to follow the 2 steps mentioned below:
1) Download the source code from
https://sourceforge.net/projects/hunspell/files/Hyphen/2.8/hyphen-2.8.8.tar.gz/download
2) Compile it and you will get a binary named example:
hyphen-2.8.8$ ./example ~/dev/smc/hyphenation/hi_IN/hyph_hi_IN.dic ~/hi_sample.text
I have downloaded and uncompressed the tar file. My question is how to create a Dockerfile to automate this?
There are only 3 commands involved:
./configure
make all-recursive
make install
I can select the official Python image as a base container. But how do I write the commands in a Dockerfile?
You can do that with a RUN command:
FROM python:<version number here>
RUN ./configure && make all-recursive && make install
CMD ["<some command here>"]
What you use for <some command here> depends on what the image is meant to do. Remember that Docker containers only run as long as that command is executing, so if you put the configure/make/install steps in a script and use that as your entry point, it's going to build your program, and then the container will halt.
Also, you need to get the downloaded files into the container. That can be done using a COPY or an ADD directive (before the RUN, of course). If you have the tar.gz file saved locally, then ADD will both copy the file into the container and expand it into a directory automatically. COPY will not expand it, so if you go that route, you'll need to add a tar -zxvf or similar to the RUN.
If you want to download the file directly into the container, that could be done with ADD <source URL>, but in that case it won't expand it, so you'll have to do that in the RUN. COPY doesn't allow sourcing from a URL. This post explains COPY vs ADD in more detail.
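Putting those pieces together, here is a minimal sketch of a complete Dockerfile; the base tag, target paths, and final command are assumptions, since the question does not say how the image will ultimately be used:

FROM python:3.9
# ADD auto-extracts a local tar.gz into the target directory
ADD hyphen-2.8.8.tar.gz /usr/src/
WORKDIR /usr/src/hyphen-2.8.8
RUN ./configure && make all-recursive && make install
# placeholder command; the real invocation would pass a .dic file and a text file
CMD ["./example"]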
You can put the three commands in a shell script and then use the following Docker instructions (a sketch of the script itself follows below):
COPY ./<path to your script>/<script-name>.sh /
ENTRYPOINT ["/<script-name>.sh"]
CMD ["run"]
For reference, you can create your Dockerfile the way it was done for Apache ActiveMQ Artemis, one of the projects I worked on:
https://github.com/apache/activemq-artemis/blob/master/artemis-docker/Dockerfile-ubuntu
If I build a CMake project and create an executable with make, then delete everything except the executable, the executable is still functional. Can I,
build the project so that the only output is the file that can be executed with ./project
or
build all of the files, create the executable with make, then delete everything except the executable afterwards
and if so, how do I do that?
If I am getting this correctly, you want to create a stand-alone binary that can be executed even if the Docker image does not have any dependencies; in that case you need to use the static option during the build. I am no expert in this, but it may be as described in the following answer: Compiling a static executable with CMake.
Next, you might use a multi-stage build in Docker, which lets you end up with a final, minimal image containing only your executable file, without any build dependencies, just the packages needed for your run-time environment. I have an example, not with make but created with g++, that achieves the same concept:
FROM gcc:5 as builder
COPY ./hello_world_example.cc /hello_world_example.cc
RUN g++ -o hello_world_binary -static hello_world_example.cc && chmod +x hello_world_binary
FROM debian:jessie
COPY --from=builder /hello_world_binary /hello_world_binary
CMD ["/hello_world_binary"]
And the final result when you run the container:
$ docker run --rm -it helloworldimage:latest
Hello from Dockerized image
Why do you need that?
You can add an install() command to your CMakeLists.txt and then call make install to copy your executable into the CMAKE_INSTALL_PREFIX directory. If you set CMAKE_INSTALL_PREFIX to an empty dir, you'd end up with a directory containing only your executable file.
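A minimal sketch of that approach (the target name "project" and the prefix directory are made up):

# in CMakeLists.txt, assuming the executable target is called "project":
install(TARGETS project DESTINATION .)

# then on the command line:
cmake -DCMAKE_INSTALL_PREFIX=/tmp/dist .
make install
# /tmp/dist now contains only the executable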
If a Dockerfile is written with mistakes, for example:
CMD ["service", "--config", "/etc/service.conf] (missing quote)
Is there a way to lint it and detect such mistakes before building?
Try:
Either the Haskell Dockerfile Linter ("hadolint"), also available online. hadolint parses the Dockerfile into an AST and performs checking and validation based on best-practice Docker image rules. It also uses ShellCheck to lint the Bash code in RUN commands.
Or dockerlinter (node.js-based).
I've performed a simple test against a simple Dockerfile with RUN, ADD, ENV and CMD. dockerlinter was smart about grouping violations of the same rule together, but it was not able to inspect as thoroughly as hadolint, possibly due to the lack of ShellCheck to statically analyze the Bash code.
Although dockerlinter falls short in the scope it can lint, it does seem to be much easier to install: npm install -g dockerlinter will do, while compiling hadolint requires a Haskell compiler and a build environment that takes forever to compile.
$ hadolint ./api/Dockerfile
L9 SC2046 Quote this to prevent word splitting.
L11 SC2046 Quote this to prevent word splitting.
L8 DL3020 Use COPY instead of ADD for files and folders
L10 DL3020 Use COPY instead of ADD for files and folders
L13 DL3020 Use COPY instead of ADD for files and folders
L18 DL3020 Use COPY instead of ADD for files and folders
L21 DL3020 Use COPY instead of ADD for files and folders
L6 DL3008 Pin versions in apt get install. Instead of `apt-get install <package>` use `apt-get install <package>=<version>`
L6 DL3009 Delete the apt-get lists after installing something
L6 DL3015 Avoid additional packages by specifying `--no-install-recommends`
$ dockerlint ./api/Dockerfile
WARN: ADD instruction used instead of COPY on line 8, 10, 13, 18, 21
ERROR: ./api/Dockerfile failed.
Update in 2018: since hadolint now has an official Docker repository, you can get the executable quickly:
id=$(docker create hadolint/hadolint:latest)
docker cp "$id":/bin/hadolint .
docker rm "$id"
Or you can use this command:
docker container run --rm -i hadolint/hadolint hadolint - < Dockerfile
This is a statically compiled executable (according to ldd hadolint), so it should run regardless of installed libraries. A reference on how the executable is built: https://github.com/hadolint/hadolint/blob/master/docker/Dockerfile.
If you have a RedHat subscription, you can access the "Linter for Dockerfile" application directly at https://access.redhat.com/labs/linterfordockerfile/; information about the application is located at https://access.redhat.com/labsinfo/linterfordockerfile
This Node.js application is also available on GitHub https://github.com/redhataccess/dockerfile_lint if you prefer to run it locally.
I use npm's dockerfile_lint very successfully in my CI pipeline. You can add or extend rules, and using the package.json you can create different configs for different jobs. There are both
Docker CLI
docker run -it --rm --privileged -v `pwd`:/root/ \
projectatomic/dockerfile-lint \
dockerfile_lint [-f Dockerfile]
docker run -it --rm --privileged -v `pwd`:/root/ \
-v /var/run/docker.sock:/var/run/docker.sock \
projectatomic/dockerfile-lint \
dockerfile_lint image <imageid>
and Atomic CLI available
atomic run projectatomic/dockerfile-lint
atomic run projectatomic/dockerfile-lint image <imageid>
Also, you can lint your images before tagging.
I created dockerfile-validator as an extension for VS Code, which uses the dockerfile-lint mentioned in a previous answer. By default it uses dockerfile-lint's default rules, but in VS Code User Settings (dockerfile-validator.rulefile.path) you can specify a path to a custom rule file with your own coding standards.
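For example, the entry in VS Code's settings.json could look like this (the rule-file path is a placeholder):

{
  "dockerfile-validator.rulefile.path": "/path/to/custom_rules.yaml"
}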
Recently, I came across dockerfilelint, which is NodeJS based.
dockerfilelint Dockerfile
It supports the following rules and rudimentary CMD checks:
required_params
uppercase_commands
from_first
invalid_line
sudo_usage
apt-get_missing_param
apt-get_recommends
apt-get-upgrade
apt-get-dist-upgrade
apt-get-update_require_install
apkadd-missing_nocache_or_updaterm
apkadd-missing-virtual
invalid_port
invalid_command
expose_host_port
label_invalid
missing_tag
latest_tag
extra_args
missing_args
add_src_invalid
add_dest_invalid
invalid_workdir
invalid_format
apt-get_missing_rm
deprecated_in_1.13
Hadolint seems like a better option, but this may suffice for simple needs. Also, GitHub's super-linter uses this.
I'm not too familiar with Go, but it looks like you can simply call the Parse method as is done in the test suite here. If that does not return an err, then your lint passes. I'm assuming it's trivial to expose that to a script or something to call during development.
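A minimal sketch of that idea in Go, assuming the parser package from the Moby/BuildKit source tree (the import path has moved between Docker versions, so treat it as an assumption):

package main

import (
	"fmt"
	"os"

	"github.com/moby/buildkit/frontend/dockerfile/parser"
)

func main() {
	f, err := os.Open("Dockerfile")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Parse returns an error for a syntactically invalid Dockerfile
	if _, err := parser.Parse(f); err != nil {
		fmt.Fprintln(os.Stderr, "lint failed:", err)
		os.Exit(1)
	}
	fmt.Println("Dockerfile parsed OK")
}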
I have set up an automated build on Docker Hub here (the sources are here).
The build goes well locally. I have also tried to rebuild it with the --no-cache option:
docker build --no-cache .
And the process completes successfully
Successfully built 68b34a5f493a
However, the automated build fails on Docker hub with this error log:
...
Cloning into 'nerdtree'...
Vim: Warning: Output is not to a terminal
Vim: Warning: Input is not from a terminal
Error detected while processing command line:
E492: Not an editor command: PluginInstall
E492: Not an editor command: GoInstallBinaries
mv: cannot stat `/go/bin/*': No such file or directory
This build apparently fails on the following vim command:
vim +PluginInstall +GoInstallBinaries +qall
Note that the warnings Output is not to a terminal and Input is not from a terminal also appear in the local build.
I cannot understand how this can happen. I am using a standard Ubuntu 14.04 system.
I finally figured it out. The issue was related to this one.
I am using Docker 1.0 on my host machine, but a later version is in production on Docker Hub. Without an explicit ENV HOME=... line in the Dockerfile, version 1.0 uses / as the home directory, while /root is used by the later version. The result was that vim was not able to find its .vimrc file, since it was copied to / instead of /root. The solution I used is to explicitly define ENV HOME=/root in my Dockerfile, so there are no differences between the two versions.
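In Dockerfile terms, the fix amounts to a line like the following (a sketch based on the description above; the .vimrc copy is illustrative):

# pin the home directory so both Docker versions behave the same
ENV HOME /root
COPY .vimrc /root/.vimrc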