Azure DevOps build pipeline cannot recognise Docker volume mount - docker

In my build pipeline, I'm trying to run the task below.
The task's main responsibility is to mount the Test and Script directories from Azure DevOps into the container's working directory, /app/, and then run the tests (basically it runs npm test inside the container). Unfortunately, I don't see any output, so I changed the command at the end of the docker run to ls -ltrR /app to check whether the files were copied. The directories are created, but there are no files inside.
To prove the files exist in $(System.DefaultWorkingDirectory) on the Azure DevOps side, I ran various ls commands before the docker run, and they show that the files do exist in $(System.DefaultWorkingDirectory). But for some reason the files are not mapped into the container. I also tried $PWD and $(Build.SourcesDirectory), but it doesn't work in any case.
I replicated the same docker run command on my local workstation, and there it works as expected. Can anyone suggest how to mount the files into the docker run from an Azure DevOps build task?
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      echo "check if Script and Test files exist"
      ls -Rtlr ./Script
      cat ./Script/dev_blueprint_jest2/MyJavaScript.js
      ls -ltrR $PWD/Script
      ls -ltr $PWD/Test
      ls -ltrR $(Build.SourcesDirectory)/Script/
      docker run -v $(System.DefaultWorkingDirectory)/Script/:/app/ScriptStages/ -v $(System.DefaultWorkingDirectory)/Test/:/app/Test/ -i dockerimage ls -ltrR /app/
    workingDirectory: $(System.DefaultWorkingDirectory)
  displayName: 'Run the JS test pipeline.'

Related

Target directory not found in Docker container

I ran into this problem while building my Docusaurus site locally in a Docker container.
From inside the Docusaurus directory I run this command:
docker run -it --rm --name doc-lab --mount type=bind,source=D:\work\some_path,target=/target_path -p 3000:3000 doc-lab
Then, once the container is up, I run this command in the container's terminal:
npm --prefix=/target_path run build
And I get the following:
docusaurus: not found
Although the directory does exist:
# cd /
# ls
bin boot dev target_path etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
# npm --prefix=/target_path run build
> target_path#0.0.1 build
> docusaurus build
sh: 1: docusaurus: not found
What went wrong?
Running the container itself succeeds, and the site opens at localhost.
Usually npm run start runs a development version, and npm run build prepares the files to be deployed to a production environment. So in your case I think npm run build should be run either with a RUN directive in the Dockerfile, or even on your computer before building the Docker image, with the results then copied into the target directory. The CMD instruction in the Dockerfile would then contain the command to run the production server. You can check the scripts section of the package.json file to see the actual commands behind npm run start and npm run build.
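A rough sketch of the first option (a multi-stage build) might look like this; the image tags, the /site path, and the nginx stage are illustrative assumptions, not taken from the question:

```dockerfile
# Hypothetical multi-stage Dockerfile for a Docusaurus site
FROM node:18-alpine AS build
WORKDIR /site
COPY package*.json ./
RUN npm install            # installs docusaurus and the other dependencies
COPY . .
RUN npm run build          # emits the static site into /site/build by default

# Serve the prebuilt static files with a plain web server
FROM nginx:alpine
COPY --from=build /site/build /usr/share/nginx/html
```

The point is that the build (and its node_modules) happens at image-build time, so the running container only needs the static output.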
Well, it was not that simple. Because I didn't create the Docker image myself but downloaded it, I needed to run
npm install
And that was the answer.

Cypress in docker can't find cypress.json file

I'm struggling with testing my app with Cypress and Docker. I use the dedicated Docker image with this command: docker run -it -v $PWD:/e2e -w /e2e cypress/included:8.7.0
I ALWAYS get this error when I launch it: `Could not find a Cypress configuration file, exiting.
We looked but did not find a default config file in this folder: /e2e`
Meaning that Cypress can't find cypress.json, but it is right there in the dedicated folder. Here is my directory/file tree:
pace
  front
    cypress
    cypress.json
So this is a standard file tree for e2e testing, and despite all of my tricks (not using $PWD but the full directory path, reinstalling Docker, the colima engine, etc.) nothing works, and if I run npm run cypress locally everything works just fine!
Needless to say, I am in the /pace/front directory when I try these commands.
Can you help me please ?
The -v $PWD:/e2e is a docker instruction to mount a volume (a bind mount). It mounts the current directory to /e2e inside the docker container at runtime.
The docs mention a structure where the cypress.json file is expected to end up directly under /e2e. To get it there, you have to either:
use -v $PWD/pace/front:/e2e, or
run the command from inside the pace/front directory.
Since the CMD and ENTRYPOINT in the image run from the WORKDIR, you could also try running it from where you were, but changing the workdir with:
-w /e2e/pace/front
I have not seen their Dockerfile, but my assumption is that this would work.
My personal choice would be to just run it from pace/front.
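Putting those options together, the full commands would look like the following. This is only a sketch based on the tree above; both need a running Docker daemon, and both are meant to be run from the directory that contains pace/:

```shell
# Option 1: mount only the directory that actually contains cypress.json
docker run -it -v $PWD/pace/front:/e2e -w /e2e cypress/included:8.7.0

# Option 2: mount the whole tree, but point the workdir at pace/front
docker run -it -v $PWD:/e2e -w /e2e/pace/front cypress/included:8.7.0
```

In both cases cypress.json ends up directly under the container's working directory, which is what the image expects.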

Azure DevOps docker build failing while copying files after the build

I have below task in my build pipeline:
- bash: |
    echo "To test file exists and can be copied inside Dockerfile"
    cat $(System.DefaultWorkingDirectory)/configCreate/properties/config.properties
    ls -ltr $(System.DefaultWorkingDirectory)/configCreate/pipelines/
    cd $(System.DefaultWorkingDirectory)/configCreate/
    echo $(pwd)
    ls -ltr $(System.DefaultWorkingDirectory)/configCreate/
    echo "Started Building Docker Image"
    docker build -t test -f Dockerfile .
  displayName: 'Build Docker Image'
But it fails while copying configCreate/properties/config.properties inside the Dockerfile.
Dockerfile
COPY properties/config.properties ${SDC_HOME}
Also, I tried to pass workingDIR as $(System.DefaultWorkingDirectory)/configCreate as an argument to the docker build command, and my Dockerfile would then be
COPY ${workingDIR}/configCreate/properties/config.properties ${SDC_HOME}
Every time it fails, stating No such file or directory.
Is there something I should do so that the files can be copied?
You have to pass the workingDirectory context, as below, in order for the files to be copied properly inside Docker.
- bash: |
    cd $(System.DefaultWorkingDirectory)/configCreate/
    echo "Started Building Docker Image with tag $latestECRTag"
    docker build -t test:latest -f Dockerfile .
  displayName: 'Build Docker Image with Tag.'
  workingDirectory: $(System.DefaultWorkingDirectory)/configCreate/
Now the COPY inside the Dockerfile works fine:
COPY properties/config.properties ${SDC_HOME}
Missing this workingDirectory causes the "no such file or directory" error.
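Equivalently, you could skip the cd and pass the context directory to docker build explicitly; COPY paths in a Dockerfile resolve against the build context (the final argument to docker build), not the directory the shell happens to be in. A sketch, assuming the same layout as the question:

```shell
docker build -t test:latest \
  -f $(System.DefaultWorkingDirectory)/configCreate/Dockerfile \
  $(System.DefaultWorkingDirectory)/configCreate/
```

With configCreate/ as the context, COPY properties/config.properties resolves the same way regardless of the task's working directory.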

how to copy file from docker container to host using shell script?

I have created an image for an automation project. When I run the container, it executes all the tests inside the container and then generates a test report. I want to copy this report out before deleting the container.
FROM maven:3.6.0-ibmjava-8-alpine
COPY ./pom.xml .
ADD ./src $HOME/src
COPY ./test-execution.sh /
RUN mvn clean install -Dmaven.test.skip=true -Dassembly.skipAssembly=true
ENTRYPOINT ["/test-execution.sh"]
CMD []
Below is the shell script:
#!/bin/bash
echo parameters you provided : "$#"
mvn test "$#"
cp api-automation:target/*.zip /Users/abcd/Desktop/docker_report
You will want to use the docker cp command. See here for more details.
However, it appears docker cp does not support standard Unix globbing patterns (i.e., the * in your src path).
So instead you will want to run:
docker cp api-automation:target/ /Users/abcd/Desktop/docker_report
However, you will then need a final step to remove all the non-zip files from your docker_report directory.
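That cleanup step might look like this. The docker cp line is commented out because it needs the running container (the api-automation name is from the question); the find filter is demonstrated on a mock directory instead:

```shell
# On the host (not inside the container), copy the whole target/ directory out:
# docker cp api-automation:target/ /Users/abcd/Desktop/docker_report

# Then keep only the .zip reports. Demonstrated here on a mock directory:
mkdir -p docker_report/target
touch docker_report/target/report.zip docker_report/target/surefire.txt
find docker_report -type f ! -name '*.zip' -delete
ls docker_report/target
```

find with `! -name '*.zip' -delete` removes every regular file that does not end in .zip, leaving only the report.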

Why is git clone failing when I build an image from a dockerfile?

FROM ansible/ansible:ubuntu1604
MAINTAINER myname
RUN git clone http://github.com/ansible/ansible.git /tmp/ansible
RUN git clone http://github.com/othertest.git /tmp/othertest
WORKDIR /tmp/ansible
ENV PATH /tmp/ansible/bin:/sbin:/usr/sbin:/usr/bin:bin
ENV PYTHONPATH /tmp/ansible/lib:$PYTHON_PATH
ADD inventory /etc/ansible/hosts
WORKDIR /tmp/
EXPOSE 8888
When I build from this Dockerfile, I see "Cloning into /tmp/ansible" (and likewise for othertest) in red text, which I assume is an error. When I then run the container and look around, I see that every step from the Dockerfile built correctly except for the git repositories, which are missing.
I can't figure out what I'm doing wrong; I'm assuming it's a simple mistake.
Building the Dockerfile:
sudo docker build --no-cache -f Dockerfile .
Running the image:
sudo docker run -i -t de32490234 /bin/bash
The short answer:
Put your files anywhere other than in /tmp and things should work fine.
The longer answer:
You're basing your image on the ansible/ansible:ubuntu1604 image. If you inspect this image via docker inspect ansible/ansible:ubuntu1604 or look at the Dockerfile from which it was built, you will find that it contains a number of volume mounts. The relevant line from the Dockerfile is:
VOLUME /sys/fs/cgroup /run/lock /run /tmp
That means that all of those directories are volume mount points, which means any data placed into them will not be committed as part of the image build process.
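A minimal fix along those lines, assuming nothing else in the image depends on the /tmp paths, would be to clone into a directory that is not declared as a VOLUME, e.g. /opt:

```dockerfile
# /opt is not a volume mount point in the base image, so the cloned
# files are committed into the image layers
RUN git clone http://github.com/ansible/ansible.git /opt/ansible
WORKDIR /opt/ansible
# note: /bin is included here, unlike in the original PATH
ENV PATH /opt/ansible/bin:/sbin:/usr/sbin:/usr/bin:/bin
ENV PYTHONPATH /opt/ansible/lib:$PYTHONPATH
```

The same applies to the othertest clone and to the ADD inventory step if you move it under a volume path.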
Looking at your Dockerfile, I have two comments unrelated to the above:
You're explicitly setting the PATH environment variable, but you're neglecting to include /bin, which will cause all kinds of problems, such as:
$ docker run -it --rm bash
docker: Error response from daemon: oci runtime error: exec: "bash": executable file not found in $PATH.
You're using WORKDIR twice, but the first time (WORKDIR /tmp/ansible) you're not actually doing anything that cares what directory you're in (you're just setting some environment variables and copying a file into /etc/ansible).
