Jenkins sidecar approach - running multiple containers alongside the main container

I'm trying to replicate the sidecar approach in Jenkins from here: https://www.jenkins.io/doc/book/pipeline/docker/#running-sidecar-containers but I'm trying to run multiple containers inside.
My code looks something like this:
PG_IMG = 'custom_postgres:dev'
PG_NAME = 'db'
script.docker.withRegistry('https://index.docker.io/v1/', 'dockerHub') {
    script.docker.image(PG_IMG).withRun('-p 5432:5432') { c ->
        script.docker.image(PG_IMG).inside("--link ${c.id}:${PG_NAME}") {
            script.sh("echo ======= db ======")
        }
        script.docker.image('redis').inside("-d --link ${c.id}:${PG_NAME}") {
            script.sh("echo ======= redis ======")
        }
        script.docker.image('python:3.6').inside("-u root -v /var/run/docker.sock:/var/run/docker.sock --link ${c.id}:'${PG_NAME}'") {
            script.sh("apt-get update -qq && apt-get install curl -y && curl --silent -SL https://get.docker.com/ | sh")
            script.sh("docker ps -a")
        }
    }
}
I'm then trying to list all the containers running in the VM from the python container by mounting docker.sock, but the 'docker ps -a' output shows only two containers as up and running, and there is no trace of the redis container (not even in an exited state), even though the Jenkins logs show the redis image being pulled.
I can't figure out what I'm missing here and why the redis container isn't listed in the 'docker ps -a' output.
Any help or suggestions would be appreciated.


Bitbucket pipelines/Docker: Connection refused

I am trying to configure a Bitbucket CI pipeline to run tests. Stripping out the details, I have a Makefile which looks as follows to run some form of integration tests.
test-e2e:
	docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME}
	godog
	docker-compose -f ${DOCKER_COMPOSE_FILE} down
The Docker Compose file defines a single webserver with its ports exposed.
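For context, a minimal sketch of what that compose file might look like (the container name oracle-go and the "127.0.0.1:10077:10077" mapping are taken from later in this question; everything else is hypothetical):
version: "3"
services:
  oracle-go:
    build: .                      # hypothetical; the real service definition isn't shown
    container_name: oracle-go
    ports:
      - "127.0.0.1:10077:10077"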
The pipeline looks as follows:
- step: &integration-testing
    name: Run integration tests
    script:
      # do this to make go modules work with a private repo
      - apk add libc-dev py-pip python-dev libffi-dev openssl-dev gcc make bash
      - pip install docker-compose
      - git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/"
      - go get github.com/onsi/ginkgo/ginkgo
      - go get github.com/onsi/gomega/...
      - go get github.com/DATA-DOG/godog/cmd/godog
      - make build-only && make test-e2e
I am facing two separate issues, and for both I have not been able to find a solution.
1. I keep getting connection refused when the tests are run.
To elaborate: docker-compose brings up a server with a proper host:port mapping ("127.0.0.1:10077:10077"), and the godog command is intended to run the tests by querying that server, but it always ends in connection refused. This link has a possible solution, so I am exploring that.
2. The pipeline almost always runs commands before the container is up. I've tried fixing this by changing the invocation to:
test-e2e:
	docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME} && sleep 10 && docker exec -i oracle-go godog && docker-compose -f ${DOCKER_COMPOSE_FILE} down
However the container is always brought up after the sleep (almost instantaneously).
Example:
Creating oracle-go ...
Sleep 10
docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
docker exec -i oracle-go godog
Creating oracle-go ... done
Error response from daemon: Container 7bab5322203756b972e7f0a3c6e5827413279914a68c705221b8af7daadc1149 is not running
Please let me know if there is a way around it.
If I understood your question correctly, you want to wait for the server to start before running tests.
Instead of manually sleeping, you should use wait-for-it.sh (or an alternative). See the relevant Docker docs for more information.
For example:
test-e2e:
	docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME}
	bash wait-for-it.sh <HOST>:<PORT> -- docker exec -i oracle-go godog
	docker-compose -f ${DOCKER_COMPOSE_FILE} down
Change <HOST> and <PORT> to your service's host name and port respectively. Alternatively, you could use wait-for-it.sh in your Docker Compose command or the like.
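If you'd rather not vendor wait-for-it.sh, a plain polling loop gives the same effect. A rough sketch, assuming the "127.0.0.1:10077:10077" mapping from the question and that the server answers plain HTTP:
docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME}
# Poll the mapped port for up to 30 seconds before running the tests.
for i in $(seq 30); do
  curl -s http://127.0.0.1:10077/ >/dev/null && break
  sleep 1
done
docker exec -i oracle-go godog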

Extending CouchDB Docker image

I’m trying to extend the CouchDB Docker image to pre-populate CouchDB (with initial databases, design documents, etc.).
In order to create a database named db, I first tried this initial Dockerfile:
FROM couchdb
RUN curl -X PUT localhost:5984/db
but the build failed since the couchdb service is not yet started at build time. So I changed it into this:
FROM couchdb
RUN service couchdb start && \
    sleep 3 && \
    curl -s -S -X PUT localhost:5984/db && \
    curl -s -S localhost:5984/_all_dbs
Note:
- the sleep was the only way I found to make it work, since it did not work with the curl option --connect-timeout;
- the second curl is only there to check that the database was created.
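As a hedged aside: if the curl inside the image is 7.52 or newer, its built-in retry flags could replace the fixed sleep, e.g.:
RUN service couchdb start && \
    # retry for up to ~10s while the connection is still refused
    curl -sS --retry 10 --retry-connrefused --retry-delay 1 -X PUT localhost:5984/db && \
    curl -sS localhost:5984/_all_dbs
(--retry alone does not retry on connection refused, hence --retry-connrefused.)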
The build seems to work fine:
$ docker build . -t test3 --no-cache
Sending build context to Docker daemon 6.656kB
Step 1/2 : FROM couchdb
---> 7f64c92d91fb
Step 2/2 : RUN service couchdb start && sleep 3 && curl -s -S -X PUT localhost:5984/db && curl -s -S localhost:5984/_all_dbs
---> Running in 1f3b10080595
Starting Apache CouchDB: couchdb.
{"ok":true}
["db"]
Removing intermediate container 1f3b10080595
---> 7d733188a423
Successfully built 7d733188a423
Successfully tagged test3:latest
What is weird is that when I now start a container from it, the database db does not seem to have been saved into the test3 image:
$ docker run -p 5984:5984 -d test3
b34ad93f716e5f6ee68d5b921cc07f6e1c736d8a00e354a5c25f5c051ec01e34
$ curl localhost:5984/_all_dbs
[]
Most of the standard Docker database images include a VOLUME line that prevents creating a derived image with prepopulated data. For the official couchdb image you can see the relevant line in its Dockerfile. Unlike the relational-database images, this image doesn’t have any support for scripts that run at first startup.
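The relevant line is along these lines (paraphrasing the official image's Dockerfile; the exact path may vary by version):
VOLUME /opt/couchdb/data
Anything a RUN step writes under that path during docker build lands in an anonymous volume that is discarded along with the intermediate container, which is why the db created above vanishes from the final image.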
That means you need to do the initialization from the host or from another container. If you can directly interact with it using its HTTP API, then this could look like:
# Start the container
docker run -d -p 5984:5984 -v ... couchdb

# Wait for it to be up
for i in $(seq 20); do
  if curl -s http://localhost:5984 >/dev/null 2>&1; then
    break
  fi
  sleep 1
done

# Create the database
curl -XPUT http://localhost:5984/db
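If you'd rather not depend on curl being available on the host, the same initialization can run from a throwaway container on a shared Docker network. A hedged sketch (the network name couch-net, the container name couch, and the curlimages/curl helper image are all just choices for this example):
docker network create couch-net
docker run -d --name couch --network couch-net couchdb
# Poll until CouchDB answers, then create the database.
docker run --rm --network couch-net --entrypoint sh curlimages/curl -c '
  until curl -s http://couch:5984 >/dev/null; do sleep 1; done
  curl -X PUT http://couch:5984/db'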

How can I call the docker daemon of the host machine from a container?

Here is exactly what I need. I already have a project which starts up a particular set of docker images, and it works completely fine.
But I want to create another image, specifically to build this project from scratch with all the dependencies inside. So the problem is: when building, in order to create docker images, we need to access the docker daemon running on the host machine from the building container.
Is there any way of doing this?
If you need to access docker on the host from inside a container, you can simply expose the Docker socket inside the container using a host mount (-v /host/path:/container/path on the docker run command line).
For example, if I start a new fedora container exposing the docker socket on my host:
$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock fedora bash
Then install docker inside the container:
[root@d28650013548 /]# yum -y install docker
...many lines elided...
I can now talk to docker on my host:
[root@d28650013548 /]# docker info
Containers: 6
Running: 1
Paused: 0
Stopped: 5
Images: 530
Server Version: 17.05.0-ce
...
You can let the container access the host's docker daemon through the docker socket and "trick" it into having the docker executable inside the container without installing docker in it, like this (with an ubuntu:xenial container for the example):
docker run --name dockerInsideContainer -ti -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker ubuntu:xenial
Inside this, you can launch any docker command, for example docker images, to check that it's working.
If you see an error like this: docker: error while loading shared libraries: libltdl.so.7: cannot open shared object file: No such file or directory, you should install a package called libltdl7 inside the container. For example, you can bake it into a Dockerfile for the container or install it directly on run:
FROM ubuntu:xenial
RUN apt update && apt install -y libltdl7
or
docker run --name dockerInsideContainer -ti -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker ubuntu:xenial bash -c "apt update && apt install -y libltdl7 && bash"
Hope it helps

Is it possible to start a stopped container from another container

There are two containers, A and B. Once container A starts, one process is executed, then the container stops. Container B is just a web application (say expressjs). Is it possible to kickstart A from container B?
It is possible to grant a container access to docker so that it can spawn other containers on your host. You do this by exposing the docker socket inside the container, e.g.:
docker run -v /var/run/docker.sock:/var/run/docker.sock --name containerB myimage ...
Now, if you have the docker client available inside the container, you will be able to control the docker daemon on your host and use that to spawn your "container A".
Before trying this approach, you should be aware of the security considerations: access to docker is equivalent to root access on the host, which means that if your web application is ever remotely compromised, you have handed the attackers the keys to your host. This is described more fully in this article.
It is possible by mounting the docker socket.
Container A
It will print the time to stdout (and to its logs) and exit.
docker run --name contA ubuntu date
Container B
The trick is to mount the host's docker socket, then install the docker client in the container. It will then interact with the daemon just as if you were using docker from the host. Once docker is installed, it simply restarts container A every 5 seconds.
docker run --name contB -v /var/run/docker.sock:/var/run/docker.sock ubuntu bash -c "
apt-get update && apt-get install -y curl &&
curl -sSL https://get.docker.com/ | sh &&
watch --interval 5 docker restart contA"
You can see that contA is being restarted by looking at its logs:
docker logs contA
That said, Docker is really meant for long-running services. There's some talk over at the Docker GitHub issues about specifying short-lived "job" services for things like maintenance, cron jobs, etc., but nothing has been decided, much less coded. So it's best to build your system so that containers are up and stay up.
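For example, if contB were a real long-lived service rather than a demo, a restart policy is one way to keep it up (a sketch, not from the original answer; myimage is a placeholder):
# Let the daemon bring the service container back up if it ever exits.
docker run -d --restart unless-stopped --name contB myimage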
docker-compose.yml (credits to larsks)
# ...
volumes:
- /var/run/docker.sock:/var/run/docker.sock
# ...
Dockerfile (credits to Aaron V)
# ...
ENV DOCKERVERSION=19.03.12
RUN curl -fsSLO https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKERVERSION}.tgz \
&& tar xzvf docker-${DOCKERVERSION}.tgz --strip 1 -C /usr/local/bin docker/docker \
&& rm docker-${DOCKERVERSION}.tgz
# ...
Node.js index.js (credits to Arpan Abhishek, Maulik Parmar and anishsane)
// ...
const { exec } = require("child_process");
// ...
exec('docker container ls -a --format "table {{.ID}}\t{{.Names}}" | grep <PART_OF_YOUR_CONTAINER_NAME> | cut -d" " -f1 | cut -f1 | xargs -I{} docker container restart -t 0 {}', (error, stdout, stderr) => {
  if (error) {
    console.log(`error: ${error.message}`);
    return;
  }
  if (stderr) {
    console.log(`stderr: ${stderr}`);
    return;
  }
  console.log(`stdout: ${stdout}`);
});
// ...
Please make sure that your application is at least behind password protection. Exposing docker.sock in any way is a security risk.
Here you can find other Docker client versions: https://download.docker.com/linux/static/stable/x86_64/
Please replace <PART_OF_YOUR_CONTAINER_NAME> with a part of your container name.
