Run docker-compose from a Docker container

I have Jenkins running inside a Docker container with docker.sock mounted. Can I call docker-compose from this container to run a service on the host machine? I tried executing the installation script from within the container, but it keeps saying
"no such file or directory".
docker exec jenkins curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
docker exec jenkins chmod +x /usr/local/bin/docker-compose

It is achievable, but tricky to get right.
You need to bind-mount the directory containing the docker-compose.yml file on the Docker host into the container at the exact same path.
So if the docker-compose.yml file lives at /home/leonid/workspace/project1/docker-compose.yml on the Docker host, you need to add the volume -v /home/leonid/workspace/project1/:/home/leonid/workspace/project1/ to the jenkins container.
Then, in your Jenkins job:
cd /home/leonid/workspace/project1/
docker-compose up -d
Why is that?
Keep in mind that docker-compose gives instructions to the docker engine. The docker engine runs on the docker host (and not in the Jenkins container). So any path given by docker-compose to the docker engine must exist on the docker host.
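For example, a sketch of starting the Jenkins container with both the socket and the project path mounted; the image name here is an assumption, so use whatever Jenkins image you actually run:
docker run -d --name jenkins \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home/leonid/workspace/project1/:/home/leonid/workspace/project1/ \
  jenkins/jenkins:lts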

Create your own Dockerfile based on the image you use for builds (probably docker:latest).
Then, in a RUN line, download docker-compose and make it executable.
Spin up a Jenkins agent that builds from your image instead of the default one.
You need to install docker-compose in that build container, not on the Jenkins master.
For builds in gitlab-ci I had a special build container based on the docker image with compose installed on top. I think this is your case: you are using Jenkins to spin up a container based on docker:latest, which by default does not have docker-compose. You need to either create your own image FROM docker:latest and install compose, or use an image from Docker Hub that is built like this.
Alternatively, you could install compose as part of your build: just download it to some local directory and use it from there.
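A minimal sketch of such a Dockerfile. Note that docker:latest is Alpine-based, so the glibc-linked release binary from GitHub tends to fail there with exactly the "no such file or directory" error from the question; installing from the Alpine package repository avoids that, assuming a recent Alpine where the docker-compose package is available:
FROM docker:latest
# docker:latest is Alpine/musl, so install compose from the package repo
# rather than the glibc-linked GitHub release binary
RUN apk add --no-cache docker-compose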

The "Install as a container" section in the docs worked for me: https://docs.docker.com/compose/install/
sudo curl -L --fail https://github.com/docker/compose/releases/download/1.25.0/run.sh -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Related

how to copy files from one docker service to another, inside of docker bash

I am trying to copy a file from one docker-compose service to another while in the service's bash environment, but I cannot seem to figure out how to do it.
Can anybody provide me with an idea?
Here is the command I am attempting to run:
(docker cp ../db_backups/latest.sqlc pgadmin_1:/var/lib/pgadmin/storage/mine/)
The error is simply:
bash: docker: command not found
There's no way to do that by default. There are a few things you could do to enable that behavior.
The easiest solution is just to run docker cp on the host (docker cp from the first container to the host, then docker cp from the host to the second container).
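On the host, that looks like this sketch, reusing the paths from the question:
# first container -> host, then host -> second container
docker cp containerA:/db_backups/latest.sqlc /tmp/latest.sqlc
docker cp /tmp/latest.sqlc pgadmin_1:/var/lib/pgadmin/storage/mine/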
If it all has to be done inside the container, the next easiest solution is probably to use a shared volume:
docker run -v shared:/shared --name containerA ...
docker run -v shared:/shared --name containerB ...
Then in containerA you can cp ../db_backups/latest.sqlc /shared, and in containerB you can cp /shared/latest.sqlc /var/lib/pgadmin/storage/mine.
This is a nice solution because it doesn't require installing anything inside the container.
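Since the question is about docker-compose services, the same idea can be declared in the compose file itself. A sketch, where the service names and the app image are assumptions standing in for your real ones:
version: "3"
services:
  app:
    image: your-app-image   # hypothetical service that produces db_backups
    volumes:
      - shared:/shared
  pgadmin:
    image: dpage/pgadmin4
    volumes:
      - shared:/shared
volumes:
  shared: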
Alternatively, you could:
Install the docker CLI inside each container, and mount the Docker socket inside each container. This would let you run your docker cp command, but it gives anything inside the container complete control of your host (because access to docker == root access).
Run sshd in the target container, set up the necessary keys, and then use scp to copy things from the first container to the second container.

Docker bind mount directory in /tmp not working

I'm trying to mount a directory in /tmp to a directory in a container, namely /test. To do this, I've run:
docker run --rm -it -v /tmp/tmpl42ydir5/:/test alpine:latest ls /test
I expect to see a few files when I do this, but instead I see nothing at all.
I tried moving the folder into my home directory and running again:
docker run --rm -it -v /home/theolodus/tmpl42ydir5/:/test alpine:latest ls /test
at which point I see the expected output. This makes me think I have misconfigured something and/or the permissions have bitten me. Have I missed a step in installing Docker? I did it via sudo snap install docker, and then configured Docker to let me run as non-root by adding myself to the docker group. Running as root doesn't help...
Host machine is Ubuntu 20.04, docker version is 19.03.11
When running Docker as a snap, all files that Docker uses, such as Dockerfiles and bind-mounted directories, must be in $HOME.
Ref: https://snapcraft.io/docker
The /tmp filesystem simply isn't accessible to the docker engine when it's running within the snap isolation. You can install docker directly on Ubuntu from the upstream Docker repositories for a more traditional behavior.
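For example, a sketch of switching from the snap to the upstream packages, using Docker's convenience install script:
sudo snap remove docker
curl -fsSL https://get.docker.com | sh
# re-add yourself to the docker group; log out and back in afterwards
sudo usermod -aG docker $USER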

"docker-compose: not found" in Jenkins pipeline. Tried adding path to environment

I am running Jenkins inside Docker on my DigitalOcean droplet. When my Jenkinsfile runs "docker-compose build", I am receiving
"line 1: docker-compose: not found" while attempting to build.
My first question: if I mounted my volume with /var/run/docker.sock:/var/run/docker.sock in my docker-compose file, would I still need to add the CLI to my Dockerfile?
RUN curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.04.0-ce.tgz \
&& tar xzvf docker-17.04.0-ce.tgz \
&& mv docker/docker /usr/local/bin \
&& rm -r docker docker-17.04.0-ce.tgz
From looking around, it seems it should be fine with just adding the volume, but mine only worked after having both.
The second question (similar to the first): should docker-compose be working already by now, or do I need to install docker-compose in my Dockerfile as well?
I have seen
pipeline {
    environment {
        PATH = "$PATH:<folder_where_docker-compose_is>"
    }
}
for docker-compose, is this referring to the location on my Droplet? I have tried this too but sadly that did not work either.
Mounting the docker socket into your container only lets the docker client inside the container talk to the docker engine running on the host machine.
You still need to install the docker and docker-compose clients in order to invoke these commands from the CLI.
You need to install docker and docker-compose, make sure the jenkins user is in the docker group, and set the docker group ID inside the container to the docker group ID on the host.
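If you would rather not bake a group ID into the image, docker run's --group-add flag can grant the socket's group at start time. A sketch; the image name is an assumption:
# give the container user the GID that owns the host's docker socket
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add "$(stat -c '%g' /var/run/docker.sock)" \
  jenkins/jenkins:lts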
Example Dockerfile
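A sketch of such a Dockerfile; the base image, the pinned versions, and the docker group ID 999 are assumptions, so match the GID to getent group docker on your host:
FROM jenkins/jenkins:lts
USER root
# install the static docker CLI and a docker-compose binary (versions are examples)
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz \
       | tar xzf - --strip-components=1 -C /usr/local/bin docker/docker \
    && curl -fsSL https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64 \
       -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose \
    && groupadd --gid 999 docker \
    && usermod -aG docker jenkins
USER jenkins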

How to copy SSH from JENKINS host into a DOCKER container?

I can't copy the file from the host into the container using the Dockerfile, because I'm simply not allowed to, as mentioned in the Docker documentation:
The path must be inside the context of the build; you cannot
COPY ../something /something, because the first step of a docker build
is to send the context directory (and subdirectories) to the docker
daemon.
I'm also unable to do it from inside the Jenkins job, because the job's commands run inside the shell of the Docker container; there is no way to talk to the parent (which is the Jenkins host).
This jenkins plugin could have been a life saver, but as mentioned in the first section: distribution of this plugin has been suspended due to unresolved security vulnerabilities.
This is how I copy files from the host into a Docker image using a Dockerfile.
I have a folder called tomcat.
Inside it, I have a tar file and a Dockerfile.
Commands for the whole process, just for understanding:
$ pwd
/home/user/Documents/dockerfiles/tomcat/
$ ls
apache-tomcat-7.0.84.tar.gz Dockerfile
Sample Dockerfile:
FROM ubuntu_docker
COPY apache-tomcat-7.0.84.tar.gz /home/test/
...
Docker commands:
$ docker build -t testserver .
$ docker run -itd --name test1 testserver
$ docker exec -it test1 bash
Now you are inside the Docker container:
# ls
apache-tomcat-7.0.84.tar.gz
As you can see I am able to copy apache-tomcat-7.0.84.tar.gz from host to Docker container.
Notice the first line of the Docker documentation you shared:
The path must be inside the context of the build;
So as long as the path is reachable during the build, you can copy it.
Another way of doing this would be to use a volume:
docker run -itd -v $(pwd)/somefolder:/home/test --name test1 testserver
Notice the -v parameter.
You are telling Docker to mount Current_Directory/somefolder to /home/test inside the container.
Once the container is up and running, you can simply copy any file to $(pwd)/somefolder and it will appear inside the container at /home/test.
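For the SSH files in the question's title, that just means dropping them into the mounted folder on the host; the key filename here is an example:
$ mkdir -p somefolder
$ cp ~/.ssh/id_rsa.pub somefolder/
Inside the container, the key then shows up at /home/test/id_rsa.pub.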

Airflow inside docker running a docker container

I have Airflow running on an EC2 instance, and I am scheduling some tasks that spin up a Docker container. How do I do that? Do I need to install Docker on my Airflow container? And what is the next step after that? I have a yaml file that I am using to spin up the container, and it is derived from the puckel/airflow Docker image.
I got a simpler solution working which just requires a short Dockerfile to build a derived image:
FROM puckel/docker-airflow
USER root
RUN groupadd --gid 999 docker \
    && usermod -aG docker airflow
USER airflow
and then
docker build -t airflow_image .
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /usr/bin/docker:/bin/docker:ro \
-v /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7:ro \
-d airflow_image
Finally resolved
My EC2 setup is running Ubuntu Xenial 16.04 and using a modified puckel/airflow Docker image that is running Airflow.
Things you will need to change in the Dockerfile:
Add USER root at the top of the Dockerfile:
USER root
Mounting the docker binary was not working for me, so I had to install the docker binary in my Docker container:
Install Docker from Docker Inc. repositories.
RUN curl -sSL https://get.docker.com/ | sh
Search for the wrapdocker file on the internet and copy it into the scripts directory in the folder where the Dockerfile is located. This starts the Docker daemon inside the Airflow container.
Install the magic wrapper
ADD ./script/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
Add airflow to the docker group so that airflow can run Docker jobs:
RUN usermod -aG docker airflow
Switch to the airflow user:
USER airflow
In your Compose file, or via command-line arguments to docker run, mount the Docker socket from the host into the Airflow container:
- /var/run/docker.sock:/var/run/docker.sock
You should be good to go !
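Putting those steps together, the modified Dockerfile might look like this sketch; the ./script/wrapdocker path comes from the steps above:
FROM puckel/docker-airflow
USER root
# Install Docker from Docker Inc. repositories
RUN curl -sSL https://get.docker.com/ | sh
# Install the magic wrapper that can start the Docker daemon in the container
ADD ./script/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
# Let the airflow user run Docker jobs
RUN usermod -aG docker airflow
USER airflow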
You can spin up docker containers from your airflow docker container by attaching volumes to your container.
Example:
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro -v /path/to/bin/docker:/bin/docker:ro your_airflow_image
You may also need to attach some libraries required by Docker. This depends on the system you are running Docker on. Just read the error messages you get when running a docker command inside the container; they will tell you what you need to attach.
Your airflow container will then have full access to Docker running on the host.
So if you launch docker containers, they will run on the host running the airflow container.
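A quick sanity check from inside the container; the container name airflow is an assumption:
docker exec -it airflow docker ps
# this should list the containers running on the EC2 host itself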
