I illustrated my problem through a diagram to complement my explanations:
To give some context: I'm working with continuous integration. I have a host machine with Jenkins running on it, and the pipeline script runs containers A and B to perform the build (A) and the static analysis (B), in that order. The build is done first so that the compile_commands.json file gets generated by make.
The project is an IoT (ESP32) project and I use the official Espressif image to build it. The project folder is /var/lib/jenkins/workspace/iot-proj/ on the Jenkins machine (the host), and I share its contents with the containers through volumes. When the pipeline executes the command docker run -v $WORKSPACE/ESPComm:$WORKSPACE/ESPComm -w $WORKSPACE/ESPComm espressif/idf:v4.2.2 idf.py build, a build folder is created containing the file compile_commands.json, which cppcheck will use for the static analysis. Since this folder is shared through the volume, compile_commands.json persists on the host and container B picks it up without problems.
The problem is that compiling the project pulls in Espressif libraries from the /opt/esp/idf/... folder, these are listed in compile_commands.json, and that folder is not accessible to container B when it runs cppcheck. How can I make this folder of Espressif libraries, which lives in container A, accessible in container B?
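To make the flow concrete, the two pipeline steps look roughly like this (only the Espressif build image is named above; the image and exact cppcheck invocation for container B are placeholders here):

# container A: build inside the ESP-IDF image; build/compile_commands.json lands in the workspace shared with the host
docker run -v $WORKSPACE/ESPComm:$WORKSPACE/ESPComm -w $WORKSPACE/ESPComm espressif/idf:v4.2.2 idf.py build
# container B: static analysis on the same shared workspace (cppcheck-image is a placeholder)
docker run -v $WORKSPACE/ESPComm:$WORKSPACE/ESPComm -w $WORKSPACE/ESPComm cppcheck-image cppcheck --project=build/compile_commands.json --enable=all --inline-suppr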
I tried this, but it didn't work, even with bindfs:
https://github.com/moby/moby/issues/26872
sudo ls /proc/$(docker inspect --format {{.State.Pid}} YOUR_CONTAINER_NAME)/root
I had already read that volumes only go from the host into the container; even so, I tried the command below, and the folder really is empty:
docker run -it -v $PWD:/project -v /opt/esp/idf/components/:/opt/esp/idf/components/ -w /project espressif/idf:v4.2.2
I thought I could easily solve my problem by adding these directories to a .suppressions file so that cppcheck ignores those files. I tried to pass the full path of the folder to be ignored, as I've done in other projects, but it didn't work; the following error message appears in Jenkins:
+ cppcheck --project=/var/lib/jenkins/workspace/iot-proj-TEST/ESPComm/build/compile_commands.json --inline-suppr --enable=all --suppress=missingInclude --suppress=unmatchedSuppression -- suppressions-list=/var/lib/jenkins/workspace/iot-proj-TEST/ESPComm/build/.suppressions --inconclusive --std=c11 --xml --xml-version=2 --std=c11
cppcheck: error: '/opt/esp/idf/components/xtensa/debug_helpers.c' from compilation database does not exist
cppcheck: error: failed to load project '/var/lib/jenkins/workspace/iot-proj-TEST/ESPComm/build/compile_commands.json'. An error occurred.
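For reference, a suppressions-list file of the kind described normally has one suppression per line in cppcheck's [error id]:[filename] format, where the filename part may contain wildcards, so a path-based entry would look like this (the path is taken from the error above):

*:/opt/esp/idf/components/*

Note that this alone would probably not help here: the errors above say cppcheck fails to load the project because a file listed in compile_commands.json does not exist, so the suppression never comes into play.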
Goal
I am using Docker to run JMeter in Azure DevOps. I am trying to use Blazemeter's Parallel Controller, which is not native to JMeter. So, according to the justb4/jmeter image documentation, I used the following command to start the image and run the JMeter test:
docker run --name jmetertest -i -v /home/vsts/work/1/s/plugins:/plugins -v $ROOTPATH:/test -w /test justb4/jmeter ${@:2}
Error
However, it produces the following error while trying to handle the plugin (I know the plugin makes the difference, because the error does not occur when I test without it):
cp: can't create '/test/lib/ext': No such file or directory
As far as I understand, this error is produced when one of the parent directories of the directory you are trying to create does not exist. Is there something I am doing wrong, or is there actually something wrong with the image?
References
For reference, I will include links to the image documentation and the repository.
Image: https://hub.docker.com/r/justb4/jmeter
Repository: https://github.com/justb4/docker-jmeter
Looking into the Dockerfile:
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
Looking into entrypoint.sh
if [ -d /plugins ]
then
    for plugin in /plugins/*.jar; do
        cp $plugin $(pwd)/lib/ext
    done;
fi
It basically copies the plugins from the /plugins folder (if it is present) into the lib/ext folder relative to the current working directory.
I don't know why you added the -w /test stanza to your command line, but it explicitly "tells" the container that the working directory is /test, not /opt/apache-jmeter-xxxx; that's why the script fails to copy the files.
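In other words, with -w /test the copy in the entrypoint effectively becomes something like this (the plugin jar name is just an example):

# $(pwd) is /test because of -w /test, so the entrypoint runs:
cp /plugins/bzm-parallel.jar /test/lib/ext
# -> cp: can't create '/test/lib/ext': No such file or directory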
In general I don't think the approach is very sound, because:
In Azure DevOps you won't have your "local" folder (unless you want to put the plugin binaries under version control)
Some JMeter plugins have other .jars as dependencies, so when you install a plugin you should:
put the plugin itself under the lib/ext folder of your JMeter installation
put the plugin's dependencies under the lib folder of your JMeter installation
So I would recommend amending the Dockerfile to download the JMeter Plugins Manager and install the plugin(s) you need from the command line.
Something like:
# download the JMeter Plugins Manager into lib/ext
RUN wget https://jmeter-plugins.org/get/ -O /opt/apache-jmeter-${JMETER_VERSION}/lib/ext/jmeter-plugins-manager.jar
# cmdrunner is needed to drive the Plugins Manager from the command line
RUN wget https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2/cmdrunner-2.2.jar -P /opt/apache-jmeter-${JMETER_VERSION}/lib/
# generate the PluginsManagerCMD.sh wrapper script
RUN java -cp /opt/apache-jmeter-${JMETER_VERSION}/lib/ext/jmeter-plugins-manager.jar org.jmeterplugins.repository.PluginManagerCMDInstaller
# install the Parallel Controller plugin
RUN /opt/apache-jmeter-${JMETER_VERSION}/bin/PluginsManagerCMD.sh install bzm-parallel
I am trying to build an image of a simple Spring app on a WebSphere Liberty base image, with the application installed at the root context.
While it builds and runs fine locally, the same image is not built properly when building with Kaniko (used by Jenkins).
Sample project - https://github.com/dhananjay12/ci-cd-spring-project
Docker file for wslc - https://github.com/dhananjay12/ci-cd-spring-project/blob/master/Dockerfile-wslc
FROM websphere-liberty:18.0.0.4-javaee7
# Copy war file to apps folder
ADD ./target/ci-cd-spring-project*.war config/apps/ci-cd-spring-project.war
# Define the root context path for application
RUN sed -i "0,/<\/server>/s/<\/server>/ <webApplication contextRoot=\"\/\" location=\"ci-cd-spring-project.war\" \/>\n\n&/" config/server.xml
Locally it builds and runs fine.
When building with Kaniko, the image comes out sort of corrupted, and when running it I get the following error:
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: failed to register layer: Error processing tar file(exit status 1): mkdir /config/apps: no such file or directory.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
Any suggestions?
The problem was that /config was a symlink:
config -> /opt/ibm/wlp/usr/servers/defaultServer
Adding the files directly to /opt/ibm/wlp/usr/servers/defaultServer/apps/ solved the problem.
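With that workaround, the relevant Dockerfile lines would look roughly like this (same content as above, just using the real path instead of the /config symlink):

FROM websphere-liberty:18.0.0.4-javaee7
# Copy war file directly into the real apps folder (config/ is a symlink to this path)
ADD ./target/ci-cd-spring-project*.war /opt/ibm/wlp/usr/servers/defaultServer/apps/ci-cd-spring-project.war
# Define the root context path for the application, editing server.xml through its real path
RUN sed -i "0,/<\/server>/s/<\/server>/ <webApplication contextRoot=\"\/\" location=\"ci-cd-spring-project.war\" \/>\n\n&/" /opt/ibm/wlp/usr/servers/defaultServer/server.xml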
Using this docker image:
docker build -t batocera-docker https://github.com/batocera-linux/batocera.docker.git
I launch a container this way, so that the sources are available in the F:\docker Windows folder for browsing:
docker run -it -v F:\docker:/build batocera-docker
The following commands start the build process:
git clone git://git.buildroot.net/buildroot
cd buildroot/
make pc_x86_64_bios_defconfig
make
Which fails when processing the "host-gmp" component:
>>> host-gmp 6.1.2 Building
The build fails with the following error (but from experimenting, it seems it does not always fail on the same files):
m4: cannot open `invert_limb_table.asm': No such file or directory
This is "strange" because the following command shows the file exists where it should (and issuing the cat command shows valid file content!):
root@fe9bc1b08539:/build/buildroot# ls -la /build/buildroot/output/build/host-gmp-6.1.2/mpn/invert_limb_table.asm
lrwxrwxrwx 1 root root 35 Feb 12 22:01 /build/buildroot/output/build/host-gmp-6.1.2/mpn/invert_limb_table.asm -> ../mpn/x86_64/invert_limb_table.asm
Sometimes the error states "Stale file handle" instead.
However, such errors always occur on symbolic link files (symlinks or hardlinks?).
I'm confused because creating a symbolic link in the mounted folder seems to work (it works using the ln command), yet the build fails at some point, as if the container's overlay file system were not synchronizing its content with the mounted folder "fast enough"?
Would there be any workaround?
(I could build in a folder inside the container, but that is trivial and not very useful to me, as the sources would then not be available from outside.)
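For completeness, the container-folder variant mentioned above would look roughly like this (the /build-local path is arbitrary, and copying output/images back out is just an example of exporting the results):

docker run -it -v F:\docker:/out batocera-docker
# inside the container: build on the container's own filesystem, not on the bind mount
git clone git://git.buildroot.net/buildroot /build-local/buildroot
cd /build-local/buildroot
make pc_x86_64_bios_defconfig
make
# copy only the results back to the Windows-mounted folder
cp -r output/images /out/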
I have a project which compiles to a binary file, and running that binary file exposes some REST APIs.
To compile the project I need docker image A, which has the compiler and all the libraries required to produce the executable. To run the executable (i.e. host the service) I can get away with a much smaller image B (just a basic Linux distro, no need for the compiler).
How does one use docker in such a situation?
My thinking for this scenario is that you can prepare two base images:
The 1st one, which includes compiler and all libs for building your executable, call it base-image:build
The 2nd one, as the base image to build your final image to delivery, call it base-image:runtime
And then break your build process into two steps:
Step 1: build your executable inside base-image:build, then put the executable somewhere you can fetch it from later, like NFS or any registry;
Step 2: write a Dockerfile that starts FROM base-image:runtime, fetch your artifact/executable from wherever Step 1 put it, docker build your delivery image, and then docker push it to your registry for release.
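A minimal sketch of those two steps, keeping the base image names from above (myservice is a made-up binary name, and the install path is just an example):

# Dockerfile.build - Step 1: compile inside the build image
FROM base-image:build
COPY . /src
RUN cd /src && make

# Dockerfile.runtime - Step 2: package only the executable on the runtime image
FROM base-image:runtime
COPY myservice /usr/local/bin/myservice
CMD ["/usr/local/bin/myservice"]

Between the two builds you extract the executable from the build image (for example with docker create followed by docker cp, or by pushing it to NFS/a registry as described in Step 1) so that the second Dockerfile can COPY it in.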
Hope this could be helpful :-)
mkdir local_dir
docker run -dv $PWD/local_dir:/mnt BUILD_CONTAINER
Compile your code and save it to /mnt inside the container. It will be written to local_dir on your host filesystem and will persist after the container is destroyed.
You should now write a Dockerfile with a step that copies the new binary in, and build that image. But for example's sake...
docker run -dv $PWD/local_dir:/mnt PROD_CONTAINER
Your bin, and everything else in local_dir, will reside in the container at /mnt/
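The Dockerfile route mentioned above would look roughly like this (PROD_CONTAINER stands for the runtime base image, and mybinary is a placeholder name):

FROM PROD_CONTAINER
# copy in the binary that the build container wrote to local_dir
COPY local_dir/mybinary /usr/local/bin/mybinary
CMD ["/usr/local/bin/mybinary"]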
I have set up an automated build on Docker Hub here (the sources are here).
The build goes well locally. I have also tried to rebuild it with --no-cache option:
docker build --no-cache .
And the process completes successfully
Successfully built 68b34a5f493a
However, the automated build fails on Docker Hub with this error log:
...
Cloning into 'nerdtree'...
Vim: Warning: Output is not to a terminal
Vim: Warning: Input is not from a terminal
Error detected while processing command line:
E492: Not an editor command: PluginInstall
E492: Not an editor command: GoInstallBinaries
mv: cannot stat `/go/bin/*': No such file or directory
This build apparently fails on the following vim command:
vim +PluginInstall +GoInstallBinaries +qall
Note that the warnings Output is not to a terminal and Input is not from a terminal also appear in the local build.
I cannot understand how this can happen. I am using a standard Ubuntu 14.04 system.
I finally figured it out. The issue was related to this one.
I am using Docker 1.0 on my host machine, but a later version is in production on Docker Hub. Without an explicit ENV HOME=... line in the Dockerfile, version 1.0 uses / as the home directory, while the later version uses /root. The result is that vim was not able to find its .vimrc file, since it was copied to / instead of /root. The solution I used is to explicitly define ENV HOME=/root in my Dockerfile, so there is no difference between the two versions.
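In Dockerfile terms the fix amounts to something like the following (where exactly the .vimrc is copied in my real Dockerfile may differ; the lines below only illustrate the HOME part):

# make the home directory explicit so Docker 1.0 and later versions behave the same
ENV HOME /root
# .vimrc now ends up where vim will look for it
COPY .vimrc /root/.vimrc
RUN vim +PluginInstall +GoInstallBinaries +qall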