I am running CircleCI with Docker to test my code. After the tests run, the test coverage percentage is recorded to a txt file, which I want to copy to the artifacts folder.
- run:
    name: Run test coverage
    command: |
      docker-compose exec api mkdir /tmp_reports
      docker-compose exec api coverage report > /tmp_reports/coverage_output.txt
      docker cp api:/tmp_reports/coverage_output.txt /tmp/coverage_results
- store_artifacts:
    path: /tmp/coverage_results
CircleCI Error
/bin/bash: line 1: /tmp_reports/coverage_output.txt: No such file or directory
Exited with code 1
I have run this locally and copied the file from the Docker container to my local directory, but CircleCI seems to have an issue with this. Can someone point me in the right direction here? Thanks.
On the second line of your script, the redirect > /tmp_reports/coverage_output.txt writes to a file outside the container: output redirection is handled by the shell on the host, not by the process running inside the container.
So on line 1 you create the directory inside the container, and on line 2 the redirect fails because the directory /tmp_reports does not exist outside the container.
You can fix this by replacing all 3 lines with:
mkdir -p /tmp/coverage_results
docker-compose exec api coverage report > /tmp/coverage_results/coverage_output.txt
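With that change, the relevant part of the CircleCI config would look roughly like this (a sketch that keeps the rest of the step from the question unchanged):
- run:
    name: Run test coverage
    command: |
      mkdir -p /tmp/coverage_results
      docker-compose exec api coverage report > /tmp/coverage_results/coverage_output.txt
- store_artifacts:
    path: /tmp/coverage_results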
Related
Use case: copy a file containing some creds from a local machine directory into an existing, already created Docker container/volume.
Per the documentation on using docker cp, I constructed my command line statement like this:
docker cp mynodered:/Users/<myUserName>/Documents/nodered-volume/creds.json /data/creds.json
However, I consistently get an error returned:
invalid output path: directory "/data" does not exist
Eventually, I found that changing the syntax of the docker cp statement to:
docker cp /Users/<myUserName>/Documents/nodered-volume/creds.json mynodered:/data/creds.json
resolved the issue.
Troubleshooting tl;dr
I didn't see this documented anywhere, but the syntax that worked for me was docker cp <current local filepath> containerName:/<intended container filepath>
Make sure there is not a space between containerName: and /<intended container filepath>
However, I consistently get an error returned: invalid output path:
directory "/data" does not exist
You are getting the above error message because that directory does not exist on the host.
# ensure /data exists; create the directory if it does not
mkdir -pv /data
# now copy whatever from container to host directory
docker cp <container-id-or-name>:/absolute/path/of/your/file /data
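If you instead need to copy from the host into the container (the original use case) and the target directory does not yet exist inside the container, you can create it there first. A small sketch using the container name from the question:
# create the target directory inside the running container
docker exec mynodered mkdir -p /data
# then copy the file from the host into the container
docker cp /Users/<myUserName>/Documents/nodered-volume/creds.json mynodered:/data/creds.json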
Trying to run a CLI command using a Pact image as part of a GitLab pipeline. However, it fails because Docker cannot find the directory (target/pacts). The command and error details are below.
Command:
docker run pactfoundation/pact-cli:latest broker publish target/pacts --consumer-app-version=$CI_COMMIT_SHORT_SHA --tag=$CI_COMMIT_REF_NAME --broker-base-url=http://localhost:9090
Error:
Error making request - Errno::ENOENT No such file or directory # rb_sysopen - /target/pacts
/usr/lib/ruby/gems/2.7.0/gems/pact_broker-client-1.29.1/lib/pact_broker/client/pact_file.rb:32:in `read', attempt 1 of 3
As part of the pipeline I ran ls target/pacts just before the docker command, and it shows that the directory exists.
I tried to map the target directory using the -v option as below, but it still gives the same error.
Altered Command:
docker run -v $(pwd)/target:/target pactfoundation/pact-cli:latest broker publish /target/pacts --consumer-app-version=$CI_COMMIT_SHORT_SHA --tag=$CI_COMMIT_REF_NAME --broker-base-url=http://localhost:9090
GitLab pipeline step:
contract-publishing:
  image: docker:latest
  stage: contract-publish
  tags:
    - docker-privileged
  before_script:
    - export
    - pwd
    - ls -al
    - ls target/pacts
  script:
    - >
      docker run -v $(pwd)/target:/target pactfoundation/pact-cli:latest
      broker publish /target/pacts
      --consumer-app-version=$CI_COMMIT_SHORT_SHA
      --tag=$CI_COMMIT_REF_NAME
      --broker-base-url=http://localhost:9090
Please help.
It seems likely this is a Docker-related problem; the error is pretty clear. I'd take the Pact image out of the equation and try something like this:
docker run -v $(pwd)/target:/target debian:latest ls /target/pacts
If that doesn't work, it might be that variable expansion or some other configuration in your GitLab setup is incorrect.
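For example, the debug command could go into the existing before_script so the check runs in the same job environment as the failing step (a sketch based on the pipeline from the question):
before_script:
  - export
  - pwd
  - ls -al
  - ls target/pacts
  - docker run -v $(pwd)/target:/target debian:latest ls /target/pacts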
I have the following script that's run in my Jenkins job:
set +x
SERVICE_ACCOUNT=`cat "$GCLOUD_AUTH_FILE"`
docker login -u _json_key -p "${SERVICE_ACCOUNT}" https://gcr.io
set -x
docker pull gcr.io/$MYPROJECT/automation:master
docker run --rm --attach STDOUT -v "$(pwd)":/workspace -v "$GCLOUD_AUTH_FILE":/gcloud-auth/service_account_key.json -v /var/run/docker.sock:/var/run/docker.sock -e "BRANCH=master" -e "PROJECT=myproject" gcr.io/myproject/automation:master "/building/buildImages.sh" "myapp"
if [ $? -ne 0 ]; then
  exit 1
fi
I am now trying to do this in cloudbuild.yaml so that I can run my script using my own automation image (which has a bunch of dependencies such as docker, JDK, pip, etc. installed), and mount my git folders into my workspace directory.
I tried putting my cloudbuild.yaml at the top level of my git repo and set it up like this:
steps:
- name: 'gcr.io/myproject/automation:master'
  volumes:
  - name: 'current-working-dir'
    path: /mydirectory
  args: ['bash', '-c', '/building/buildImages.sh', 'myapp']
timeout: 4000s
But this gives me an error saying:
invalid build: Volume "current-working-dir" is only used by one step
Just FYI, my script buildImages.sh copies folders and Dockerfiles, runs pip install, npm, and gradle commands, and then docker build commands (kind of an all-in-one solution).
What's the way to translate my script to cloudbuild.yaml?
Try this in your cloudbuild.yaml:
steps:
- name: 'gcr.io/<your-project>/<image>'
  args: ['sh', '<your-script>.sh']
Using this I was able to pull the image from Google Container Registry that has my script, then run the script with sh. It didn't matter where the script is. I'm using Alpine as the base image in my Dockerfile.
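Adapted to the image and script from the question, the step might look roughly like this (a sketch; the per-step env entries are an assumption based on the variables the Jenkins script passed):
steps:
- name: 'gcr.io/myproject/automation:master'
  args: ['bash', '/building/buildImages.sh', 'myapp']
  env:
  - 'BRANCH=master'
  - 'PROJECT=myproject'
timeout: 4000s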
I have the following Dockerfile:
FROM some:image
ADD app /app
ADD https://get.aquasec.com/microscanner /
RUN chmod +x /microscanner
RUN /microscanner my_xxx_token >> /microscan.log
The RUN /microscanner command outputs a bunch of stuff. If I don't redirect it to a file with >>, it gets printed to my console.
What I want is to write that output directly to a file on the host, because if the command reports a failure it interrupts the image build, so I cannot execute any other Docker instructions after it. Even a && cat /microscan.log appended to the same RUN command after the microscanner will not execute.
I tried doing >> /app/microscanner.log since the folder is used as a shared volume, but the file doesn't appear on the host.
So, I assume that I have to write from the container directly into a file on the host. Is that a possibility at all?
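For what it's worth, one workaround (not from this thread, and assuming the goal is just to keep the scanner output when the build aborts) is to capture the build log itself on the host instead of writing a file inside the image; the image tag below is only a placeholder:
# capture everything the build prints, including the microscanner output, on the host
docker build -t myapp:scan . 2>&1 | tee microscan.log
# note: with BuildKit enabled you may need --progress=plain to see RUN output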
I am using CI pipelines on GitLab to build Docker images for deployment to Raspbian. Since my builds need to access some private NPM packages, I include in the Dockerfile the following line, which creates a token file using the value stored in the environment variable $NPM_TOKEN:
RUN echo //registry.npmjs.org/:_authToken=$NPM_TOKEN > ~/.npmrc
This works fine when building from my usual image (resin/raspberrypi3-node). However, one of my containers is built from armhf/ubuntu. When the above line is executed, the build fails with the following error:
standard_init_linux.go:207: exec user process caused "no such file or directory"
The command '/bin/sh -c echo //registry.npmjs.org/:_authToken=$NPM_TOKEN >> ~/.npmrc' returned a non-zero code: 1
The build runs fine with docker build on my development machine (Windows 10) but not within the GitLab pipeline.
I have tried stripping down my docker and pipeline files to the bare minimum, and removed the environment variable and the tilde from the path, and this still fails for the ubuntu (but not the resin) image.
Dockerfile.test.ubuntu:
FROM armhf/ubuntu
RUN echo hello > world.txt
Dockerfile.test.resin:
FROM resin/raspberrypi3-node
RUN echo hello > world.txt
.gitlab-ci.yml:
build_image:
  image: docker:git
  services:
    - docker:dind
  script:
    - docker build -f Dockerfile.test.resin . # Succeeds
    - docker build -f Dockerfile.test.ubuntu . # Fails
  only:
    - master
I have searched for similar issues and have seen this error reported when running a .sh file that contained CRLF line endings. Although I am developing on Windows, my IDE (VS Code) is set up to use LF, not CRLF, and I have checked all the above files for compliance.
As in here, try using double quotes for your echo argument:
RUN echo "//registry.npmjs.org/:_authToken=$NPM_TOKEN" > ~/.npmrc
And first, in your Dockerfile, do a RUN ls -alrth ~/ to check the accessibility/presence of the target folder.
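A quick way to check both suggestions in one go (a sketch based on the minimal test Dockerfile from the question):
FROM armhf/ubuntu
# confirm the home directory is present and writable
RUN ls -alrth ~/
# quote the argument so the shell does not misinterpret it
RUN echo "//registry.npmjs.org/:_authToken=$NPM_TOKEN" > ~/.npmrc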
That error was also reported in this thread (without any answer), with an example where the final version of the Dockerfile, as seen here, uses this .gitlab-ci.yml.
The OP bighairdave confirms in the comments:
I copied the following from the example @VonC gave, and it worked:
variables:
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_DRIVER: overlay2

before_script:
  - docker run --rm --privileged hypriot/qemu-register
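For reference, the line that most likely matters here is the privileged hypriot/qemu-register run: it registers QEMU binfmt handlers on the Docker host, which is what lets ARM binaries (such as those in armhf/ubuntu) execute on an x86 shared runner and avoids the "no such file or directory" exec error. A quick sanity check (an assumption, not from the thread):
# register the QEMU interpreters, then an armhf image should be able to run a command
docker run --rm --privileged hypriot/qemu-register
docker run --rm armhf/ubuntu echo hello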