Using gitlab-runner locally and passing artifacts from previous step - docker

I am trying to debug some issues in my GitLab CI pipeline. I have a step B which is using some artifacts from step A.
Step A is very long (and is working in the CI), so I don't want to run it locally: I just download the resulting artifacts from GitLab. So I have an artifacts.zip, which I extracted to obtain an output and a logs directory. So far so good.
I want to run step B locally, using gitlab-runner. Note that I am using version 9.5 (https://docs.gitlab.com/runner/install/old.html).
I am using this command:
gitlab-runner exec docker step-b
As I explained, step-b needs the artifacts from step-a. This is what I tried:
gitlab-runner exec docker --docker-volumes ~/Downloads/output step-b
One of the scripts executed in step B does something like mv ../output /some/where/else. However, this script fails with the following error:
mv: cannot stat '../output': No such file or directory
Following this error, I have two questions:
Where is this script executed? It's called like this from the .gitlab-ci.yml:
./scripts/my_script.sh
What is the . in this context?
How can I make sure that using the --docker-volumes ~/Downloads/output will mount the directory in the right place so my script can find it?
EDIT
As requested, here is a description of step A.
script:
  - mkdir -p /usr/local/compil_result
  - ./scripts/compil.sh
  - mv /usr/local/compil_result ./output
artifacts:
  paths:
    - output
    - logs

Since you haven't mentioned which Docker image you use, I assume it's a custom image that you or a colleague made. You should check that image's Dockerfile to find out what the script's working directory is.
Or you could get a shell inside the container and inspect its directory structure first:
docker run --rm -it --entrypoint /bin/bash your-image-name
To mount a Docker volume, you need the host directory and the container directory separated by a colon, and you should use absolute paths for both of them.
Something like this,
gitlab-runner exec docker --docker-volumes '/home/username/Downloads/output:/output' step-b
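Since the script resolves ../output relative to the job's working directory inside the container, the host directory has to be mounted one level above that working directory. As a sketch, assuming the runner clones the project into /builds/project-0 (the exact path may differ for your runner version; check the beginning of the job log), this should let mv ../output succeed:
gitlab-runner exec docker --docker-volumes '/home/username/Downloads/output:/builds/output' step-b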

Related

Cypress in docker can't find cypress.json file

I'm struggling to test my app with Cypress in Docker. I use the dedicated Docker image with this command: docker run -it -v $PWD:/e2e -w /e2e cypress/included:8.7.0
I always get this error when I launch it: `Could not find a Cypress configuration file, exiting.
We looked but did not find a default config file in this folder: /e2e`
This means Cypress can't find cypress.json, yet it is right there in the dedicated folder. Here is my directory/file tree:
pace
  front
    cypress
    cypress.json
So this is a standard file tree for e2e testing, and despite all my tricks (using the full directory path instead of $PWD, reinstalling Docker, trying the colima engine, etc.) nothing works, while if I run npm run cypress locally everything works just fine!
Needless to say, I am in the /pace/front directory when trying these commands.
Can you help me please ?
The -v $PWD:/e2e is a docker instruction to mount a volume (a bind mount). It mounts the current directory to /e2e inside the docker container at runtime.
The docs mention a structure where the cypress.json file is expected to end up directly under /e2e. To get it to be like that, you have to either:
- mount the subdirectory: -v $PWD/pace/front:/e2e, or
- run the command from inside the pace/front directory
Since the CMD and ENTRYPOINT of docker run execute from the WORKDIR, you could also try running it from where you were but changing the workdir with:
-w /e2e/pace/front
I have not seen their Dockerfile, but my assumption is that this would work.
My personal choice would be to just run it from pace/front
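Putting the first option together: assuming you start from the directory that contains pace, the full command would be
docker run -it -v $PWD/pace/front:/e2e -w /e2e cypress/included:8.7.0
and Cypress should then find /e2e/cypress.json.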

How to copy a file from the repository, into the Docker container used for a job, in gitlab-ci.yml

How can I add a file from my project into the Docker image used in a GitLab CI job? Suppose I have the job below in my .gitlab-ci.yml:
build:master:
  image: ubuntu:latest
  script:
    - cp sample.txt /sample.txt
  stage: build
  only:
    - master
How do I copy sample.txt inside the Ubuntu image? I was thinking that since it is already a running container, we can't perform a copy command directly but would have to run
docker cp sample.txt mycontainerID:/sample.txt
but then how will I get mycontainerID? It will run inside a GitLab runner, and a random ID will be assigned on every run. Is my assumption wrong?
The file is already inside the container. If you read carefully through the CI/CD build log, you will see, at the very top after the image is pulled and started, that your repository is cloned into the running container.
You can find it under /builds/<organization>/<repository>
(note that these are examples, and you have to adjust to your actual organization and repository name)
Or with the variable $CI_PROJECT_DIR
In fact, that is the directory you are in when starting the job.
For example, this .gitlab-ci.yml:
image: alpine
test:
  script:
    - echo "the project directory is - $CI_PROJECT_DIR"
    - cat $CI_PROJECT_DIR/README.md ; echo
    - echo "and where am I? - $PWD"
The pipeline output shows the content of README.md printed from inside the container, along with the project directory and the working directory.
We do not need to copy anything; the repository files are already available in the container. GitLab does that for us.
Try the ls (Linux) or dir (Windows) command, depending on your platform, to list files and folders.
Your runner is already executing the script inside your Docker container.
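For example, a minimal job that just lists what is in the container's working directory (a sketch):
inspect:
  image: ubuntu:latest
  script:
    - pwd
    - ls -la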
What your job does here is:
- start a container using the Ubuntu image and mount your Git project in there
- cp sample.txt from the Git project's root to the container's root
- stop the container, saying "job done"
That's basically what image means: use this docker image to start a container that will execute the commands listed in the script part.
I don't exactly understand what you're trying to achieve. If it's a build job, why don't you actually COPY your file from your Dockerfile and configure your job to build it with docker build? A runner shell executor doing docker build -t your/image:latest -f build/Dockerfile . will do just fine. Then you push this image to some Docker registry (GitLab's, for example, or Docker Hub).
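A minimal sketch of that approach (the Dockerfile location, image name, and registry are assumptions): a Dockerfile such as
# build/Dockerfile
FROM ubuntu:latest
COPY sample.txt /sample.txt
built and pushed by a job like
build:master:
  stage: build
  script:
    - docker build -t your/image:latest -f build/Dockerfile .
    - docker push your/image:latest
  only:
    - master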
If your goal really is more complex and you just want to add a file to a running container, you can use the same runner (with a shell executor, not a docker one, so no image) and run something like:
- docker run --name YOUR_CONTAINER_NAME -v $PWD:/mnt ubuntu:latest cp /mnt/sample.txt /sample.txt
- docker commit -m "Commit Message" -a "You" YOUR_CONTAINER_NAME your/image:latest
- docker push your/image:latest
- docker rm YOUR_CONTAINER_NAME
Note: I'm not 100% sure the first command would work, but that is the general idea of creating an image from a container without relying on a Dockerfile, if you really can't achieve your goal with one.

Docker build using volumes at build time

Is there a way to use external volumes during a docker image build?
I have a situation where I would like to use a configuration stored in an external volume at docker image build time. Is that possible?
(edited to reflect current Docker CLI behavior)
If by 'docker image build' you mean running a single 'docker build ...' command: no, there is no way to do that (at least, not in the most recent documentation I have read). However, nothing prevents you from performing the step that needs the external volume using direct docker commands, then committing the container and tagging it just as 'docker build' would. Assuming this is the last step in your build, put all other commands (that don't need the volume) into a Dockerfile and then do this:
# -q makes docker build print only the resulting image ID
tmp_img=$(docker build -q .)
# run the volume-dependent step in a throwaway container
tmp_container=$(docker run -d -v "$my_ext_volume:$my_mount_path" --entrypoint /bin/sh "$tmp_img" -c 'your volume-dependent build command here')
docker wait "$tmp_container"
# snapshot the container's filesystem as a new tagged image
docker commit "$tmp_container" my_repo/image_tag:latest
docker rm "$tmp_container"
This does the same as having a RUN command in the Dockerfile, but with the added volume mount. The commit command in the example also tags the image.
It is a bit more complex if you need to have other Dockerfile commands after the volume-dependent one, but in most cases you can combine run commands and re-arrange your install in a way that leaves the manual run-with-volume command last, to keep things simple.
You can copy the file into the Docker image (ADD or COPY) and rm it as one of the last steps, though note that the file will still be present in the earlier layers unless you squash the image.
podman is an alternative to Docker whose CLI is essentially the same, but which also supports mounting volumes at build time.
I use this to load data into test databases without having to copy the data into the image first.
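For example, a sketch with made-up paths and image name:
podman build --volume /path/to/testdata:/testdata:ro -t my-test-image .
The mounted /testdata is visible to RUN instructions during the build but is not baked into the resulting image.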
You can use ADD combined with ARG (build time parameters) to access files or directories during the build without having to hardcode their location.
ARG MAVEN_SETTINGS=settings.xml
ADD $MAVEN_SETTINGS ./
And now you can change the file location during the build with:
docker build --build-arg MAVEN_SETTINGS=someotherfile.xml .
We are not restricted to Docker to build OCI images.
With buildah it's possible to mount volumes from the host that won't be persisted in the final image. Useful for configuration and secrets.
buildah bud --volume /home/test:/myvol:ro -t imageName .

Copy entire directory from container to host

I'm trying to copy an entire directory from my docker image to my local machine.
The image is a keycloak image, and I'd like to copy the themes folder so I can work on a custom theme.
I am running the following command -
docker cp 143v73628670f:keycloak/themes ~/Development/Code/Git/keycloak-recognition-login-branding
However, I am getting the following response -
Error response from daemon: Could not find the file keycloak/themes in container 143v73628670f
When I connect to my container using -
docker exec -t -i 143v73628670f /bin/bash
I can navigate to the themes by using -
cd keycloak/themes/
I can see it is located there and the files are as expected in the terminal.
I'm running the instance locally on a Mac.
How do I copy that entire themes folder to my local machine? What am I doing wrong please?
EDIT
As a result of running pwd, you should run the docker cp command as follows:
docker cp 143v73628670f:/opt/jboss/keycloak/themes ~/Development/Code/Git/keycloak-recognition-login-branding
You are missing the leading / (the container path needs to be absolute). Your command should therefore look like this:
docker cp 143v73628670f:/keycloak/themes/ ~/Development/Code/Git/keycloak-recognition-login-branding
Also, you could make use of Docker volumes, which allow you to pass a local directory into the container when you run it.
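For example, a sketch assuming the image keeps its themes under /opt/jboss/keycloak/themes (as the docker cp command above suggests) and that the image is jboss/keycloak:
docker run -v ~/Development/Code/Git/keycloak-recognition-login-branding:/opt/jboss/keycloak/themes/custom jboss/keycloak
Your local directory then appears inside the container as a theme named custom, and edits on the host are visible without copying.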

How to give Dockerfile input parameters from docker run command

FROM centos
RUN yum -y update
ENV zk=dx
RUN mkdir $zk
After building the image and then running the following command:
docker run -it -e zk="hifi" <image ID>
I get a directory named dx, not hifi.
Can anyone tell me how to set a Dockerfile variable from the docker run command?
It behaves this way because:
- The RUN commands in the Dockerfile are executed when the Docker image is built (like almost all Dockerfile instructions), i.e. when you run docker build.
- The docker run command runs when a container is started from the image.
So when you run docker run and set the value to "hifi", the image already exists and already contains a directory called "dx". The directory creation has already been performed; updating the environment variable to "hifi" won't change it.
You cannot set a Dockerfile build variable at run time. The build has already happened.
Incidentally, you're overwriting the value of the zk variable right before you create the directory. Even if you did successfully pass "hifi" into the docker build, it would be overwritten by the ENV line and the folder would always be called "dx".
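If the goal is to pick the directory name at image build time, a build argument (ARG) rather than ENV is the mechanism for that. A minimal sketch:
FROM centos
RUN yum -y update
ARG zk=dx
RUN mkdir $zk
docker build --build-arg zk=hifi .
If you need the value at container run time instead, keep the ENV and create the directory in an entrypoint script, since RUN instructions only execute during the build.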
