"docker-compose: not found" in Jenkins pipeline. Tried adding path to environment - docker

I am running Jenkins inside Docker on my DigitalOcean droplet. When my Jenkinsfile runs "docker-compose build", I am receiving
line 1: docker-compose: not found while attempting to build.
My first question: if I mounted the volume with /var/run/docker.sock:/var/run/docker.sock in my docker-compose file, would I still need to add the Docker CLI to my Dockerfile?
RUN curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.04.0-ce.tgz \
&& tar xzvf docker-17.04.0-ce.tgz \
&& mv docker/docker /usr/local/bin \
&& rm -r docker docker-17.04.0-ce.tgz
From looking around, it seems it should be fine with just adding the volume, but mine only worked after having both.
The second question (similar to the first): should docker-compose already be working by now, or do I need to install docker-compose in my Dockerfile as well?
I have seen
pipeline {
    environment {
        PATH = "$PATH:<folder_where_docker-compose_is>"
    }
}
suggested for docker-compose. Is this referring to the location on my Droplet? I have tried this too, but sadly that did not work either.

Mounting the docker socket into your container only lets a docker client inside the container talk to the docker engine running on the host machine.
You still need to install the docker and docker-compose clients inside the container in order to invoke these commands from the CLI.
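For example, a minimal sketch of adding both clients to a Jenkins image (the pinned versions below are assumptions; match them to your environment):
# Dockerfile fragment (sketch): add the docker and docker-compose client binaries
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz \
    | tar -xzf - --strip-components=1 -C /usr/local/bin docker/docker
RUN curl -fsSL https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64 \
        -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose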

You need to install docker and docker-compose, make sure the jenkins user is in the docker group, and set that group's id to the docker group id on the host.
Example Dockerfile:
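A hedged sketch of what that Dockerfile could look like (the group id 998 is a placeholder; check the host's id with getent group docker, and the compose version is an assumption):
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && apt-get install -y --no-install-recommends curl && rm -rf /var/lib/apt/lists/*
# Install the docker engine/CLI and a docker-compose binary (versions are placeholders)
RUN curl -fsSL https://get.docker.com | sh
RUN curl -fsSL https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64 \
        -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose
# Make the container's docker group id match the host's, then add jenkins to it
RUN groupmod -g 998 docker || groupadd -g 998 docker
RUN usermod -aG docker jenkins
USER jenkins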

Related

Storing local files in Docker Volume for sharing

I'm new to Docker, so this may be an obvious question that I'm just not using the right search terms to find an answer to, so my apologies if that is the case.
I'm trying to stand up a new CI/CD Pipeline using a purpose built container. So far, I've been using someone else's container, but I need more control over the available dependencies, so I need my own container. To that end, I've built a container (Ubuntu), and I have a local (host) directory for the dependencies, and another for the project I'm building. Both are connected to the container using Docker Volumes (-v option), like this.
docker run --name buildbox \
-v /projectpath:/home/project/ \
-v /dependencies:/home/libs \
buildImage buildScript.sh
Since this is going to eventually live in a Docker repo and be accessed by a GitLab CI/CD Pipeline, I want to store the dependencies directory in as small of a container as possible that I can push up to the Docker repo alongside my Ubuntu build container. That way I can have the Pipeline pull both containers, map the dependencies container to the build container (--volumes-from), and map the project to be built using the -v option; e.g.:
docker run --name buildbox \
-v /projectpath:/home/project/ \
--volumes-from depend_vol \
buildImage buildScript.sh
Thus, I pull buildImage and depend_vol from the Docker repo, run buildImage while attaching the dependencies container and project directory as volumes, then run the build script (and extract the build artifact when it's done). The reason I want them separate is in case I want to create different build containers that use common libraries, or if I want to create version specific dependency containers without having a full OS stored (I have plans for this).
Now, I could just start a lightweight generic container (like busybox) and copy everything into it, but I was wondering if there was simply a way to attach the volume and then store the contents in the image when the container shuts down. Everything I've seen about making a portable data store / volume starts with all the data already copied into the container.
But I want to take my local host dependencies directory and store it in a container. Is there a straightforward way to do this? Am I missing something obvious?
So this works, even if it's not quite what I was hoping for, since I'm still doing a lot of file copying (just with tarballs).
# Create a tarball of the files on the host to store, don't store the full path
tar -cvf /home/projectFiles.tar -C /home/projectFiles/ .
# Start a lightweight docker container (busybox) with a volume connection to the host (/home:/backup), then extract the tarball into the container
# cd to the drive root and untar the tarball
docker run --name libraryVolume \
-v /home:/backup \
busybox \
/bin/sh -c \
"cd / && mkdir /projectLibs && tar -xvf /backup/projectFiles.tar -C /projectLibs"
# Don't forget to commit the container image
docker commit libraryVolume
That's it. Then push to the repo.
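For the push step, a sketch under the assumption of a registry namespace (myregistry/project-libs is a placeholder); docker push needs the committed image to carry a repository name and tag:
# Name the committed image so it can be pushed, then push it
docker commit libraryVolume myregistry/project-libs:1.0
docker push myregistry/project-libs:1.0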
To use it, pull the repo, then start the data volume:
docker run --name projLib \
-v /projectLibs \
--entrypoint "/bin/sh" \
libraryVolume
Then start the container (projBuild) that is going to reference the data volume (projLib).
docker run -it --name projBuild \
--volumes-from=projLib \
-v /home/mySourceCode:/buildProject \
--entrypoint /buildProject/buildScript.sh \
builderImage
Seems to work.

Run docker-compose from Docker container

I have Jenkins running inside a Docker container with docker.sock mounted. Can I call docker-compose from this container to run a service on the host machine? I tried executing the installation script from within the container, but it keeps saying
"no such file or directory".
docker exec jenkins curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
docker exec jenkins chmod +x /usr/local/bin/docker-compose
It is achievable but tricky to get it right.
You need to mount, as a docker volume, the path the docker-compose.yml file is in on your docker host to the exact same location in the container.
So if the docker-compose.yml file location is /home/leonid/workspace/project1/docker-compose.yml on the docker host, you need to add the volume -v /home/leonid/workspace/project1/:/home/leonid/workspace/project1/ for the jenkins container.
Then, in your Jenkins job:
cd /home/leonid/workspace/project1/
docker-compose up -d
Why is that?
Keep in mind that docker-compose gives instructions to the docker engine. The docker engine runs on the docker host (and not in the Jenkins container). So any path given by docker-compose to the docker engine must exist on the docker host.
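Putting it together, a sketch of how the Jenkins container could be started so the paths line up (the project path comes from the example above; the rest is an assumption, and the image is assumed to already contain docker-compose):
# Mount the docker socket plus the compose project at the same absolute path as on the host
docker run -d --name jenkins \
    -p 8080:8080 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /home/leonid/workspace/project1/:/home/leonid/workspace/project1/ \
    jenkins/jenkins:lts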
Create your own Dockerfile based on the image you use for the build (probably docker:latest).
Then, in a RUN line, download docker-compose and mark it as executable (a sketch follows below).
Spin up a Jenkins agent to build from your image instead of the default one.
You need to install docker-compose on that build container, not on the Jenkins master.
For builds in gitlab-ci I had a special build container that was based on the docker image with compose installed on top. I think this is your case: you are using Jenkins to spin up a container based on docker:latest, which by default does not have docker-compose. You need to either create your own image from docker:latest and install compose in it, or use an image from Docker Hub that is already built like this.
Also, you could try to install compose as part of your build. Just download it to some local dir and use it from there.
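A minimal sketch of such an agent image, assuming a docker:latest base and that the docker-compose package is available in its Alpine repositories:
FROM docker:latest
# docker:latest is Alpine-based; add compose from the Alpine package
# (alternatively download a compose binary or pip3 install docker-compose)
RUN apk add --no-cache docker-compose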
The "Install as a container" section in the docs worked for me: https://docs.docker.com/compose/install/
sudo curl -L --fail https://github.com/docker/compose/releases/download/1.25.0/run.sh -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Can't get Docker outside of Docker to work with Jenkins in ECS

I am trying to get the Jenkins Docker image deployed to ECS and have docker-compose work inside of my pipeline.
I have hit wall after wall trying to get this Jenkins container launched and functioning. Most of the issues have just been getting the docker command to work inside of a pipeline (including getting the permissions/group right).
I've gotten to the point where the command works and uses the host docker socket (docker ps outputs the jenkins container and the ecs agent) and docker-compose is working (docker-compose --version works), but when I try to run anything that involves files inside the pipeline, I get a "no such file or directory" error. This happens when I run docker-compose -f docker-compose.testing.yml up -d --build (it can't find the yml file) and also when I try to run a basic docker build; it can't find local files used in the COPY command (i.e. COPY . /app). I've tried changing the command to use ./file.yml and $PWD/file.yml and I'm still getting the same error.
Here is my Jenkins Dockerfile:
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
apt-get install -y --no-install-recommends curl
RUN apt-get remove docker
RUN curl -sSL https://get.docker.com/ | sh
RUN curl -L --fail https://github.com/docker/compose/releases/download/1.21.2/run.sh -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN groupadd -g 497 dockerami \
&& usermod -aG dockerami jenkins
USER jenkins
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
COPY jobs /app/jenkins/jobs
COPY jenkins.yml /var/jenkins_home/jenkins.yml
RUN xargs /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state
ENV CASC_JENKINS_CONFIG /var/jenkins_home/jenkins.yml
I also have Terraform building the task definition and binding the /var/run/docker.sock from the host to the jenkins container.
I'm hoping to get this working since I have liked Jenkins since we started using it about 2 years ago and I've had these pipelines working with docker-compose in our non-containerized Jenkins install, but getting Jenkins containerized so far has been pulling teeth. I would much prefer to get this working than to have to change my workflows right now to something like Concourse or Drone.
One issue in your Dockerfile is that you are copying a file into /var/jenkins_home. Those contents will disappear, because /var/jenkins_home is declared as a volume in the parent Jenkins image, and any files you copy into a volume after the volume has been declared are discarded - see https://docs.docker.com/engine/reference/builder/#notes-about-specifying-volumes
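A possible fix, assuming the stock jenkins/jenkins entrypoint behaviour of copying /usr/share/jenkins/ref into the volume on first start (the file name is the one from the question):
# Copy the config into the reference directory instead of the volume;
# the entrypoint copies ref/ contents into /var/jenkins_home at startup
COPY jenkins.yml /usr/share/jenkins/ref/jenkins.yml
ENV CASC_JENKINS_CONFIG /var/jenkins_home/jenkins.yml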

Running docker-compose in a Dockerfile

Basically what I'm wanting to do is run Drone CI on a Now instance.
Now accepts a Dockerfile when deploying, but not a docker-compose.yml file; Drone is configured using a docker-compose.yml file.
Basically I'm wanting to know whether you can run a docker-compose.yml file as part of a Dockerfile and how this is set up. Currently I've been trying something like this:
FROM docker:latest
# add the docker-compose.yml file to the current working directory
WORKDIR /
ADD . /
# install docker-compose
RUN \
apk add --update --no-cache python3 && \
pip3 install docker-compose
RUN docker-compose up
and various variations of the above in my attempts to get something up and running; in the above case it complains about the docker daemon not running.
Any help is greatly appreciated; other solutions that achieve the above end result are also welcome.
The Dockerfile is building a docker image, and inside the resulting container you are trying to use docker-compose.
You don't have a docker daemon running inside that docker container.
docker-compose also needs to be installed.
Refer to this doc to use docker in docker: https://devopscube.com/run-docker-in-docker/
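One way around it, sketched under the assumption that using the host's daemon is acceptable: drop the RUN docker-compose up line from the Dockerfile and run compose when the container starts, with the host socket mounted (the image name drone-runner is a placeholder):
# Build the image without starting compose during the build
docker build -t drone-runner .
# At run time, mount the host's docker socket so compose can reach a daemon
docker run -v /var/run/docker.sock:/var/run/docker.sock drone-runner \
    docker-compose -f /docker-compose.yml up -d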

How to run a Docker host inside a Docker container?

I have a Jenkins container running inside Docker and I want to use this Jenkins container to spin up other Docker containers when running integration tests etc.
So my plan was to install Docker in the container but this doesn't seem to work so well for me. My Dockerfile looks something like this:
FROM jenkins
MAINTAINER xxxx
# Switch user to root so that we can install apps
USER root
RUN apt-get update
# Install latest version of Docker
RUN apt-get install -y apt-transport-https
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
RUN sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
RUN apt-get update
RUN apt-get install -y lxc-docker
# Switch user back to Jenkins
USER jenkins
The jenkins image is based on Debian Jessie. When I start a bash terminal inside a container based on the generated image and run, for example:
docker images
I get the following error message:
FATA[0000] Get http:///var/run/docker.sock/v1.16/images/json: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
I suspect that this could be because the docker service is not started. But my next problem arise when I try to start the service:
service docker start
This gives me the following error:
mount: permission denied
I've tracked the error in /etc/init.d/docker to this line:
mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
So my questions are:
1. How do I actually start a Docker host inside a container? Or is this something that should be avoided?
2. Is there something special I need to do if I'm running Mac and boot2docker?
3. Perhaps I should instead link to the Docker on the host machine, as described here?
Update: I've tried the container as user root and jenkins. sudo is not installed.
A simpler alternative is to mount the docker socket and create sibling containers. To do this, install docker on your image and run something like:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock myimage
In the container you should now be able to run docker commands as if you were on the host. The advantage of this method is that you don't need --privileged and you get to use the cache from the host. The disadvantage is that you can see all running containers, not just the ones created from the container.
1. The first container you start (the one you launch the other one inside) must be run with the --privileged=true flag.
2. I don't think there is.
3. Using the privileged flag, you don't need to mount the docker socket as a volume.
Check this project to see an example of all this.
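As a rough modern illustration of the privileged approach, assuming the official docker:dind image (TLS disabled here only for brevity; container names are placeholders):
# Start a privileged container that runs its own docker daemon
docker run -d --privileged --name inner-docker \
    -e DOCKER_TLS_CERTDIR="" docker:dind
# Point a client container at that daemon
docker run --rm --link inner-docker:docker \
    -e DOCKER_HOST=tcp://docker:2375 docker:latest docker info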
