gitlab-ci docker | using build files outside container - docker

Hey fellow developers,
I use GitLab CI, with my own gitlab-runner running as a Docker container on the server (Ubuntu 22.04, Docker 20.10, Docker image: gitlab/gitlab-runner:latest).
It builds a CRA app using npm install && npm run build; this works.
I want to use the generated CRA build (a bunch of JavaScript & co. files) outside of the Docker container: copy it into a /var/www/html folder on the server side (the same server the Docker container runs on).
How can I do that?
Thanks for any help.
EDIT: I cannot use docker cp since I'm 'inside' the container when the gitlab-ci job runs.

I finally ended up creating a second runner with a shell executor, and building and deploying the app with it.
There was a way to do what I originally wanted, though: mount a volume in the job container from the host where the build is used, and put the build output in it. But that is more configuration and code than using a second "shell" runner.
Thanks.
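For reference, a minimal sketch of that volume approach, assuming the default /etc/gitlab-runner/config.toml location (the runner name, URL/token placeholders, and node:18 image are illustrative):

# /etc/gitlab-runner/config.toml
[[runners]]
  name = "docker-builder"
  url = "https://gitlab.example.com/"
  token = "<RUNNER_TOKEN>"
  executor = "docker"
  [runners.docker]
    image = "node:18"
    # bind-mount the host's web root into every job container,
    # so the build job can copy its CRA output straight onto the host
    volumes = ["/var/www/html:/var/www/html"]

The job's script then just ends with something like cp -r build/* /var/www/html/.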

Related

Docker bind mount is empty inside a containerized TeamCity build agent with DOCKER_IN_DOCKER enabled

I'm running containerized build agents using the linux-sudo image tag and, using a Dockerfile (here), I have successfully customized it to suit our needs. This is running very well and successfully executing the builds I need it to.
I am running it in a Swarm cluster with a docker-compose file (here), and I have recently enabled the DOCKER_IN_DOCKER variable. I can successfully run docker run hello-world in this container and I'm happy with the results. However, I am having an issue running a small utility container inside the agent with a bind-mount volume.
I want to use this Dockerfile inside the build agent to run npm CLI commands against the files in a mounted directory. I'm using the following command to run the container with a custom command and a volume as a bind mount:
docker run -it -v $(pwd):/app {IMAGE_TAG} install
So, in theory, this runs npm install against the local directory that is mounted into the container (npm is the command in the ENTRYPOINT, so I can just pass install to the container for simplicity). I can run this in other environments (Ubuntu and WSL) and it works very well. However, when I run it in the linux-sudo build agent image, it doesn't seem to mount the directory properly: if I inspect the /app directory in the running utility container (the npm one), it is empty. Shouldn't I expect to see the contents of the bind mount here, as I do in other environments?
I have inspected the container, and it confirms there is a volume of type bind; I can also see it when I list the Docker volumes.
Is there something fundamental I am doing wrong here? I would really appreciate some help.
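A minimal sketch of the usual cause, assuming the agent talks to the host's Docker daemon through a mounted socket (sibling containers rather than true docker-in-docker): the -v source path is resolved by the daemon against the host's filesystem, not the agent's, so a path that only exists inside the agent container shows up as a fresh empty directory:

# inside the agent, $(pwd) is an agent-container path; the daemon
# resolves it on the HOST, finds nothing, and creates an empty dir
docker run -it -v $(pwd):/app {IMAGE_TAG} install

# workaround: bind-mount the agent's work directory at the SAME path
# it has on the host, so the path is valid in both filesystems
# (host path and image name below are illustrative)
docker run -d -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/buildagent/work:/opt/buildagent/work \
  my-custom-agent-image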

How to add plugins to a Neo4j Docker container at the build phase instead of downloading plugins at container run?

I need my neo4j service to use a few plugins. My Neo4j Docker build file is:
FROM neo4j:3.5.14
ENV NEO4J_AUTH='neo4j/password'
ENV NEO4J_ACCEPT_LICENSE_AGREEMENT='yes'
ENV NEO4JLABS_PLUGINS='["apoc", "graph-algorithms"]'
RUN echo "dbms.security.procedures.unrestricted=algo.*" >> ./conf/neo4j.conf
The part ENV NEO4JLABS_PLUGINS='["apoc", "graph-algorithms"]' will download the listed plugins at container run and install them.
But I have to run this container in a restricted environment, and I do not have internet access, so I cannot download the plugins at container run.
**What can I do to add these plugins at the build phase itself, so that they come ready with the image?**
You can try building and running your container in a non-restricted environment, installing everything, and then:
docker commit your-docker your-new-image-with-all-installed
docker save your-new-image-with-all-installed > your-image.tar
After that, you can take this image to your restricted (without internet) environment and deploy with:
docker load < your-image.tar
docker run ...
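Alternatively, if you can build the image in an environment with internet access, you can bake the jars in at build time with a Dockerfile along these lines (a sketch; the download URLs and versions are placeholders you'd pin to releases matching Neo4j 3.5.x, and it assumes wget is available in the base image):

FROM neo4j:3.5.14
ENV NEO4J_AUTH='neo4j/password'
ENV NEO4J_ACCEPT_LICENSE_AGREEMENT='yes'
# fetch the plugin jars at image build time instead of container run;
# ./plugins is relative to the image's NEO4J_HOME, like ./conf below
RUN wget -P ./plugins \
    https://github.com/neo4j-contrib/neo4j-apoc-procedures/releases/download/<version>/apoc-<version>-all.jar \
    https://github.com/neo4j-contrib/neo4j-graph-algorithms/releases/download/<version>/graph-algorithms-algo-<version>.jar
RUN echo "dbms.security.procedures.unrestricted=algo.*" >> ./conf/neo4j.conf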

Docker inside docker with gitlab-ci.yml

I have created a gitlab runner.
I have chosen the docker executor and an ubuntu default image.
I have put this at the top of my .gitlab-ci.yml file:
image: microsoft/dotnet:latest
I was thinking that gitlab-ci would load the ubuntu image by default if there is no "image" directive in the .gitlab-ci.yml file.
But there is something strange: I am wondering now whether gitlab-ci is creating an ubuntu container and then creating a dotnet container inside that ubuntu container.
Here is a very ugly test I did on the gitlab server: I removed the /usr/bin/docker file and replaced it with a script that logs its arguments.
This is very strange, because the jobs still work and there is nothing in my log file...
Thanks
The Ubuntu image is indeed used if you don't specify an image, but you did specify one, so your jobs run in the dotnet container without ever spinning up an Ubuntu one.
Your test behaves the way it does because docker is only the client; dockerd is the daemon, and that is what the gitlab runner actually talks to.
If you want to check what's going on, you should instead call docker ps to get a list of running containers.
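To see the image selection in action, here's a minimal .gitlab-ci.yml sketch (job names and commands are illustrative): the top-level image is only a default, and each job can override it.

image: microsoft/dotnet:latest   # default for every job

build:
  script:
    - dotnet --info              # runs in the dotnet container, no ubuntu involved

check-os:
  image: ubuntu:latest           # per-job override
  script:
    - cat /etc/os-release

And while a job runs, docker ps on the host shows the single job container the runner spun up.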

How to run a shell and docker executor on the same unix host?

I would like to use the same host computer to execute Docker builds using the shell executor, as described in the link below, and normal builds using the docker executor.
I would like to be able to start builds of both types on the same host.
I would like to use the debian package provided for Ubuntu, installed via apt from the repository.
https://docs.gitlab.com/ce/ci/docker/using_docker_build.html
In other words, if I run a project that builds docker containers, the shell executor should run the commands against docker. If I build a source-code project, the docker executor should run my build inside a docker container.
Can someone please describe the steps required to achieve such a configuration?
Run gitlab-runner register multiple times. It will always append new configurations to the same /etc/gitlab-runner/config.toml file.
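A minimal sketch of the two registrations (URL and token are placeholders):

# runner 1: shell executor, for jobs that drive docker directly
sudo gitlab-runner register --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token <TOKEN> \
  --executor shell \
  --description "shell-docker-builds" \
  --tag-list docker-build

# runner 2: docker executor, for normal builds inside containers
sudo gitlab-runner register --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token <TOKEN> \
  --executor docker \
  --docker-image ubuntu:latest \
  --description "container-builds" \
  --tag-list container-build

Both end up as separate [[runners]] entries in /etc/gitlab-runner/config.toml; use tags in each project's .gitlab-ci.yml to route jobs to the right runner.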

How do you mount volumes in sibling containers started from another container?

I am using docker for my dev environment: I have a dev image and I mount my source files as a volume.
But then I wanted to do the same on my continuous integration server (gitlab ci). I carefully read the docker docs' reference to https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/, but the solution of bind-mounting docker's unix socket into a docker client container makes it impossible to mount volumes from it.
So basically my question is: how would you solve this (given I am in a docker ci server/runner)? I need to run the following commands from a container (a gitlab runner):
$ git clone ... my-sources && cd my-sources
$ docker run -v $PWD:$PWD -w $PWD my-dev-image gcc main.c
Because obviously, the volume is taken from docker's "native" host and not the current container.
The way I've solved this is by making sure that the build paths are the SAME on the host and in the CI container, e.g. starting the container with -v /home/jenkins:/home/jenkins. This way we have the volume mounted from the host into the CI container. You can change it to whatever directory you like, just make sure the jenkins user's home is set to that directory.
Note: I'm using jenkins as an example, but any CI will work on the same principle.
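A minimal sketch of that matched-paths approach (image and paths are illustrative):

# start the CI container so its workspace path also exists on the host
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home/jenkins:/home/jenkins \
  jenkins/jenkins:lts

# now, inside the CI container, $PWD under /home/jenkins is a path the
# host daemon can also resolve, so sibling mounts work as expected
docker run -v $PWD:$PWD -w $PWD my-dev-image gcc main.c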
Make sure that your CI server is started with a volume (e.g. docker run --name gitlabci -v /src gitlabci …), then, when you start the other containers, start them with docker run --volumes-from gitlabci …. That way, /src will also be available in those containers, and whatever you put in this directory (from the CI server) will be available in the other containers.
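And a sketch of the --volumes-from variant applied to the original commands (container and image names are illustrative):

# start the runner/CI container with a volume at /src
docker run -d --name gitlabci -v /src gitlab/gitlab-runner:latest

# inside a job: clone into the shared volume, then let the sibling
# container attach the very same volume with --volumes-from
cd /src && git clone ... my-sources
docker run --volumes-from gitlabci -w /src/my-sources my-dev-image gcc main.c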
