I'm trying to run a pipeline on a docker-windows executor with my own container. When running the pipeline I get the error detected dubious ownership in repository at .... The error message also suggests adding the directory as a safe directory: git config --global --add safe.directory <directory>.
I have tried doing this on my host, but it didn't fix the issue. I'm assuming this is because the command applies the setting to the git configuration on the host, while the problem arises in the container.
Where do I need to run the command to solve the issue?
And if I have to run it in the container, do I have to run it every time I run the pipeline? I'm assuming yes, because the container gets deleted after the run.
To add a bit of background information: on my development machine (Win11 Pro), I'm running a gitlab-runner with the docker-windows executor as well as Docker Desktop, which holds the Windows image I'd like to use. While the pipeline is running, I can see the helper spin up in Docker a couple of times, but the second one fails with Error(1).
The problem, as described above, is that the safe.directory needs to be added 1) before the repo is cloned and 2) every time, because the container gets deleted after the run.
This can be done in the config.toml of the gitlab-runner, by adding an entry in the [[runners]] section: pre_clone_script = "git config --global --add safe.directory <directory>"
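For illustration, a minimal sketch of such a config.toml entry (the runner name, image, and C:/builds path are assumptions; use the directory from your error message, or '*' to trust all paths):
[[runners]]
  name = "windows-runner"                      # hypothetical name
  executor = "docker-windows"
  # Runs in the container before the repo is cloned, so it fixes the
  # ownership check on every job even though the container is recreated.
  pre_clone_script = "git config --global --add safe.directory C:/builds"
  [runners.docker]
    image = "mcr.microsoft.com/windows/servercore:ltsc2022"   # hypothetical image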
Related
I'm familiar with Bamboo but new to GitLab CI. I have tried GitLab several times and found that a key advantage of GitLab is the automatic cloning of the git repository.
The tricky part is that GitLab CI can even clone the repository into the docker container automatically.
my git repo:
.git
.gitlab-ci.yml
foobar.sh
this job:
job1:
  stage: run
  image:
    name: my_image
  script:
    - ./foobar.sh
    - some other scripts within the docker
can successfully run.
The log shows that after pulling my_image there's a git clone action, like another SO answer said, but the log isn't detailed enough to let me know where this command is triggered (I'm not the owner of the GitLab CI runner, so I cannot control the log verbosity, if that matters).
So my questions:
Is this git clone command run inside or outside the docker container?
If inside, who triggered it? What's the complete docker run ... command?
If outside, when and where is the directory mounted into docker?
I have read the docs, but didn't find anywhere that explains the above mechanism.
See, the gitlab-runner pulls the image and spins up a container. Then, from inside the container, a git clone of that GitLab repo is performed (by the gitlab-runner). It's not done from outside, and nothing is mounted. It works only with the repo the pipeline belongs to.
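Roughly, what the runner executes inside that container is equivalent to the following sketch (the exact helper commands vary by runner version; CI_REPOSITORY_URL and CI_COMMIT_SHA are the runner's documented predefined variables):
# executed inside the job container by the runner, not on the host
git init
git remote add origin "$CI_REPOSITORY_URL"   # https://gitlab-ci-token:${CI_JOB_TOKEN}@<host>/<project>.git
git fetch origin
git checkout -f "$CI_COMMIT_SHA"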
If you wish to clone another repo, you have to do it manually by either baking it into your image upfront or telling the gitlab runner to perform another git clone.
script:
  - git clone https://github.com/bluebrown/dotfiles
I assume that if git is not installed in the container, this will cause problems.
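If your image does lack git, you can bake it in; a one-line sketch for a Debian-based image (adjust the package manager to your base image):
RUN apt-get update && apt-get install -y git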
I have a Jenkins setup in a docker container on my local computer.
Can I move it to a company's CI server and re-use job items?
I tried this on my local computer:
docker commit
docker push
On the CI server:
docker pull
docker run
However, when I ran Jenkins on the CI server, it started as a fresh, uninitialized installation.
How can I get all the configurations and job items using Docker?
As described in the docs for the commit command
The commit operation will not include any data contained in volumes
mounted inside the container.
The Jenkins home is mounted as a volume, so when you commit the container the Jenkins home won't be committed. Therefore, all the job configuration that is currently on the running local container won't be part of the committed image.
Your problem reduces to migrating the jenkins_home volume from your machine to the other machine, which is a solved problem with a standard recipe.
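A minimal sketch of that migration, assuming the volume is literally named jenkins_home: archive its contents into a tarball that you can copy to the other machine.
# on the local machine: archive the jenkins_home volume into a tarball
docker run --rm -v jenkins_home:/from -v "$(pwd)":/to alpine \
  tar -czf /to/jenkins_home.tar.gz -C /from .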
I would suggest, however, a better and more scalable approach, specifically for Jenkins. The problem with the first approach is that quite some manual intervention is needed whenever you want to start a similar Jenkins instance on a new machine.
The solution is as follows:
Commit the container that is currently running
Copy the job configuration out of the running container with: docker cp <container-id>:/var/jenkins_home/jobs ./jobs. This copies the job config from the running container onto your machine. Remember to clean the build folders.
Create a Dockerfile that inherits from the committed image and copies the job config under the jenkins_home.
Push the image, and you should have an image that you can pull with all the jobs configured correctly.
The Dockerfile will look something like:
FROM <committed-image>
# copy the whole directory (jobs/* would flatten the per-job folders)
COPY jobs/ /var/jenkins_home/jobs/
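Building and pushing would then look something like this (the registry and tag are hypothetical):
docker build -t my-registry/jenkins-with-jobs:latest .
docker push my-registry/jenkins-with-jobs:latest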
You need to check how the Jenkins image (hub.docker.com/r/jenkins/jenkins/) was launched on your local computer: if it was mounting a local volume, that volume should include the JENKINS_HOME with all the job configurations and plugins.
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
You need to export that volume too, not just the image.
See for instance "Docker & Jenkins: Data that Persists", which uses a data volume container that you can then export/import.
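The import side on the CI server mirrors the export sketched earlier (again assuming a volume named jenkins_home and a jenkins_home.tar.gz archive):
# on the CI server: recreate the volume and unpack the backup into it
docker volume create jenkins_home
docker run --rm -v jenkins_home:/to -v "$(pwd)":/from alpine \
  tar -xzf /from/jenkins_home.tar.gz -C /to
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts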
I'm having issues getting a Jenkins pipeline script to work that uses the Docker Pipeline plugin to run parts of the build within a Docker container. Both Jenkins server and slave run within Docker containers themselves.
Setup
Jenkins server running in a Docker container
Jenkins slave based on custom image (https://github.com/simulogics/protokube-jenkins-slave) running in a Docker container as well
Docker daemon container based on docker:1.12-dind image
Slave started like so: docker run --link=docker-daemon:docker --link=jenkins:master -d --name protokube-jenkins-slave -e EXTRA_PARAMS="-username xxx -password xxx -labels docker" simulogics/protokube-jenkins-slave
Basic Docker operations (pull, build and push images) are working just fine with this setup.
(Non-)Goals
I want the server to not have to know about Docker at all. This should be a characteristic of the slave/node.
I do not need dynamic allocation of slaves or ephemeral slaves. One slave started manually is quite enough for my purposes.
Ideally, I want to move away from my custom Docker image for the slave and instead use the inside function provided by the Docker pipeline plugin within a generic Docker slave.
Problem
This is a representative build step that's causing the issue:
image.inside {
    stage('Install Ruby Dependencies') {
        sh "bundle install"
    }
}
This would cause an error like this in the log:
sh: 1: cannot create /workspace/repo_branch-K5EM5XEVEIPSV2SZZUR337V7FG4BZXHD4VORYFYISRWIO3N6U67Q#tmp/durable-98bb4c3d/pid: Directory nonexistent
Previously, this warning would show:
71f4de289962-5790bfcc seems to be running inside container 71f4de28996233340c2aed4212248f1e73281f1cd7282a54a36ceeac8c65ec0a
but /workspace/repo_branch-K5EM5XEVEIPSV2SZZUR337V7FG4BZXHD4VORYFYISRWIO3N6U67Q could not be found among []
Interestingly enough, exactly this problem is described in the CloudBees documentation for the plugin: https://go.cloudbees.com/docs/cloudbees-documentation/cje-user-guide/index.html#docker-workflow-sect-inside
For inside to work, the Docker server and the Jenkins agent must use the same filesystem, so that the workspace can be mounted. The easiest way to ensure this is for the Docker server to be running on localhost (the same computer as the agent). Currently neither the Jenkins plugin nor the Docker CLI will automatically detect the case that the server is running remotely; a typical symptom would be errors from nested sh commands such as
cannot create /…#tmp/durable-…/pid: Directory nonexistent
or negative exit codes.
When Jenkins can detect that the agent is itself running inside a Docker container, it will automatically pass the --volumes-from argument to the inside container, ensuring that it can share a workspace with the agent.
Unfortunately, the detection described in the last paragraph doesn't seem to work.
Question
Since both my server and slave are running in Docker containers, what kind of volume mapping do I have to use to make it work?
I've seen variations of this issue, also with the agents powered by the kubernetes-plugin.
I think that for it to work the agent/jnlp container needs to share workspace with the build container.
By build container I am referring to the one that will run the bundle install command.
This could possibly work by passing extra docker run arguments to inside, as sketched below.
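A sketch of that idea using the Docker Pipeline plugin's inside step, which accepts extra docker run flags (the ruby:2.7 image is an assumption; protokube-jenkins-slave is the agent container name from the question):
// mount the agent's volumes so the build container shares its workspace
docker.image('ruby:2.7').inside('--volumes-from protokube-jenkins-slave') {
    stage('Install Ruby Dependencies') {
        sh 'bundle install'
    }
}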
The question is why you would want to do that. Most of the pipeline steps are executed on the master anyway, and the actual build will run in the build container. What is the purpose of also using an agent?
I have some difficulties configuring Jenkins to run tests on a dockerized application.
First, here is my setup: the project is on Bitbucket, and I have a docker-compose file that runs my application, which is composed of three containers for now (one for mongo, one for redis, one for my node app).
The Bitbucket webhook works well, and Jenkins is triggered when I push.
However, what I would like a build to do is:
get the repo where my docker-compose file is, run docker-compose so that my cluster is up, then run npm test inside the repo (my tests use mocha), and finally have Jenkins notified whether the tests passed.
If someone could help me get this chain of operations applied by Jenkins, it would be awesome.
The simplest way is to use the Jenkins pipeline plugin or a shell script.
To build the Docker image and run the compose setup, use the docker-compose command. The important thing is that you need to rebuild the image at the compose level: if you only run docker-compose run, Jenkins can reuse a previously built image. So run docker-compose build first.
Your Dockerfile should copy all the files of your application.
Next, when your service is ready, you can run a command inside the container using: docker exec {CONTAINER_ID} {COMMAND_TO_RUN_TESTS}.
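Putting that chain together as a single Jenkins "execute shell" build step could look like this sketch (the service name app is an assumption taken from your compose file):
docker-compose build                            # rebuild so the latest code is baked in
docker-compose up -d                            # start mongo, redis and the node app
rc=0
docker-compose exec -T app npm test || rc=$?    # run the mocha tests inside the app container
docker-compose down                             # tear the cluster down either way
exit $rc                                        # a non-zero exit marks the build as failed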
Let's say I have a container that is fully equipped to serve a Rails app with Passenger and Apache, and I have a vhost that routes to /var/www/app/public in my container. Since a container is supposed to be sort of like a process, what do I do when my Rails code changes? If the app was cloned with Git, and there are pending changes in the repo, how can the container pull in these changes automatically?
You have a choice on how you want to structure your container, depending on your deployment philosophy:
Minimal: You install all your Rails prerequisites in the Dockerfile (RUN commands), but have the ENTRYPOINT be something like "git pull && bundle install --deployment && rails server" (a sketch follows this list). At container boot time, it will fetch your latest code.
Snapshot: Same as above, but also run those commands as a RUN step at build time. This way, the container has a pre-installed snapshot of the code, but it will still update when the container is booted. Sometimes this can speed up boot time (i.e. if most of the gems are already installed).
Container as Deployment: Same as above, but change the ENTRYPOINT to "rails server" only. This way, your container is your code. You'll have to make new containers every time your code changes (automation!). The advantage is that your container won't need to contact your code repo at all. The downside is that you always have to remember what the latest container is (tags can help), and right now Docker doesn't have a good story on cleaning up old containers.
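A sketch of the Minimal variant (the base image, repo URL, and paths are all assumptions):
FROM ruby:2.7
RUN apt-get update && apt-get install -y git
WORKDIR /var/www/app
# clone on the first boot, pull on later boots, then install gems and serve
ENTRYPOINT if [ -d .git ]; then git pull; else git clone https://example.com/me/app.git .; fi && bundle install --deployment && rails server -b 0.0.0.0
For the Snapshot variant, you would add a RUN step with the same clone-and-install commands so the image ships with the code pre-installed; the ENTRYPOINT then only has to update it.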
In this scenario, it sounds like you have built an image and are now running this image in a container.
Using the image your running container originates from, you could add another build step that does a git pull of your most up-to-date code. I'd consider this an incremental update, as you're building on a preexisting image. I'd recommend tagging and pushing to your index (assuming you're using a private index) appropriately. The new image would then be available to run.
Depending on the need, you could also rebuild the base image of your software. I'm assuming you're using a Dockerfile to build your original image, which includes a git checkout of your software. You could then tag and push to your index for use as appropriate.
In Docker v0.8, it will be possible to start a new command in a running container, so you will be able to do what you want.
In the meantime, one solution would consist of using volumes.
Option 1: Docker managed volumes
FROM ubuntu
...
# ADD must come before VOLUME: build steps that change a path after it has
# been declared as a volume are discarded
ADD host/src/path /var/www/app/public
VOLUME ["/var/www/app/public"]
CMD rails server
Start and run your container, then when you need to git pull, you can simply:
$ docker ps # -> retrieve the id of the running container
$ docker run --volumes-from <container id> <your image with git installed> sh -c 'cd /var/www/app/public && git pull'
This will result in your first running container having its sources updated.
Option 2: Host volumes
You can start your container with:
$ docker run -v `pwd`/srcs:/var/www/app/public <yourimage>
and then simply git pull in your host's sources directory; it will update the container's sources.
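The workflow then reduces to the following, assuming the sources live in ./srcs on the host:
$ cd srcs && git pull   # the running container sees the change through the bind mount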