Jenkins Pipelining using Containers - docker

I'm trying to set up pipelining using Jenkins. However, my Jenkins instance is itself a container. My goal is to run each layer of my application (frontend, backend, database) with docker, but I don't want to run docker within docker.
Does it make sense to convert Jenkins from a container to a VM? Or is there a way to overcome the docker within docker inception problem?
Any thoughts would be greatly appreciated.

You should use docker-outside-of-docker rather than docker-in-docker; there's a great article about that by one of Docker's creators here: https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/.
This is what I am using and it works pretty well.
There is a gotcha with this: your bind mounts are resolved relative to the host filesystem, not the Jenkins container's filesystem. I therefore recommend making jenkins_home a bind mount rather than a named volume, and mounting it at the same path on the host and in the container, since Jenkins generates paths to files relative to the workspace (which usually lives inside jenkins_home).
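As a sketch of that setup (image tag, port, and the /var/jenkins_home path are illustrative), the docker-outside-of-docker run with a symmetric jenkins_home bind mount might look like:

```shell
# Run Jenkins as a container, but let it drive the HOST's docker engine
# by bind-mounting the host's docker socket.
# /var/jenkins_home is mounted at the SAME path on host and container,
# so any path Jenkins generates resolves identically on both sides.
docker run -d --name jenkins \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```

Any `docker build` or `docker run` issued from a pipeline inside this container is executed by the host daemon, so there is no nested daemon at all.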

Or is there a way to overcome the docker within docker inception problem?
You can use a container orchestration tool such as Kubernetes or Mesos.

Related

access to docker shell inside one of containers

I'm running a project in a container on my machine. This project needs to list the other containers on the machine. Previously the project ran directly on the machine (not in a container) and that was possible, but now it runs inside one of those containers. I want to know: is it possible to give it access for this kind of job (listing containers, stopping/starting them, or any other operations on other containers or on the host machine)?
If so, how?
You can use the so-called docker-in-docker technique, but before starting with it you should read this post:
http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
which is the best explanation of the pros and cons.
All you have to do is bind-mount /var/run/docker.sock into your container and install the docker CLI inside it. That gives you docker access inside the container, while you are actually addressing your host's docker engine.
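A minimal sketch of the socket-mounting technique (the `docker:cli` image is one convenient option; any image with the docker CLI installed works):

```shell
# Bind-mount the host's docker socket into the container.
# The docker CLI inside then talks to the HOST engine, so this
# "docker ps" lists the host's containers, including this one.
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps
```

Note that this grants the container full control over the host's docker engine (effectively root on the host), so only do it for trusted workloads.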

Volumes in Docker inside Docker?

I am running buildbot, a CI tool, on an EC2 machine. It currently runs as Docker containers: one for the buildbot master and one for a buildbot worker. Inside the buildbot worker, I again have to run docker to build images and run containers.
After doing some research on how to best do this, I have mounted the docker sock file from the host machine to the buildbot worker container. Now from inside the buildbot worker, I am able to connect to the host docker daemon and use the build cache.
The main problem now is that inside the buildbot worker I have a docker-compose file in which, for one service, I mount a file like this:
./configs/my.cnf:/etc/my.cnf
but it fails. After some more research, the reason is that ./configs/my.cnf is relative to the buildbot worker's directory, and since I am using the host docker daemon, which resolves paths against the host filesystem, it cannot find the file.
I am not able to figure out how best to do this. There were some suggestions about using data volumes for this, but I am not sure how best to use those.
Any idea on how we can do this?
Do you have any control over the creation of the buildbot worker? Can you control the buildbot worker directory?
export BUILD_BOT_DIR=$(mktemp -d) &&
docker container create \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "${BUILD_BOT_DIR}:${BUILD_BOT_DIR}" \
  -e BUILD_BOT_DIR ...
In this scenario, the path './configs/my.cnf' resolves to the same file inside the container and on the host.
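Why this works can be demonstrated without docker at all: once the working directory sits at the same absolute path on both sides, a relative compose path expands to an absolute path that the host daemon can also resolve (file contents below are illustrative):

```shell
# Create a throwaway build directory and put a config file inside it,
# mirroring the buildbot worker's checkout layout.
export BUILD_BOT_DIR=$(mktemp -d)
mkdir -p "${BUILD_BOT_DIR}/configs"
echo "[mysqld]" > "${BUILD_BOT_DIR}/configs/my.cnf"

cd "${BUILD_BOT_DIR}"
# "./configs/my.cnf" now expands to "${BUILD_BOT_DIR}/configs/my.cnf" —
# the same absolute path whether the worker or the host daemon resolves it.
readlink -f ./configs/my.cnf
```

If the worker directory instead lived at a container-only path, the daemon would look for that path on the host, find nothing, and the bind mount would fail (or silently create an empty directory).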

How to mount OpenStack container in a Docker container

I am new to OpenStack. I saw there is a feature called containers in OpenStack. I think those containers are not the same thing as Docker containers; as I understand it, OpenStack containers are just file storage (volumes?). Right or wrong?
But is there a way to mount an OpenStack container in a Docker container?
I want to have a Docker container which contains only "system files" (/bin, /usr, apache, mysql) and to put all my configuration files and PHP files in an OpenStack container.
Containers in OpenStack belong to the Object Storage service (OpenStack Swift). Swift is the OpenStack counterpart of AWS S3: what you call a "bucket" in S3 is a "container" in Swift.
Nevertheless, OpenStack includes "docker container" support through two projects:
Nova-Docker (a compute driver for Nova, working at the "hypervisor" level, but with Docker instead of KVM/QEMU/libvirt)
OpenStack Magnum: a Docker-container orchestration solution for OpenStack.
You can read about these projects at:
Magnum: https://wiki.openstack.org/wiki/Magnum
Nova-docker: https://github.com/openstack/nova-docker

Where should docker volumes live on the host?

On the host side, should all the mount points be located in the same location? Or should they reflect the locations which are inside the containers?
For example, what is the best place to mount /var/jenkins_home on the host side in order to be consistent with its Unix filesystem?
/var/jenkins_home
/srv/jenkins_home
/opt/docker-volumes/jenkins/var/jenkins_home
Other location ?
Where you mount the volume on the host is entirely up to you. Just don't map it onto any system file locations.
In my opinion, having the volumes reflect the locations inside the container is not a great idea: you will have many containers, all with a similar filesystem structure, so you would never be able to isolate one container's writes from another's.
With Jenkins, since the official Jenkins Docker image runs as the "jenkins" user, it is not a bad idea to create a jenkins user on the host and map /home/jenkins on the host to /var/jenkins_home in the container.
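A sketch of that mapping (the UID 1000 here is an assumption matching the default in the official jenkins/jenkins image; check yours with `docker run --rm jenkins/jenkins:lts id`):

```shell
# Create a matching host user so file ownership lines up across the mount.
sudo useradd --uid 1000 --create-home jenkins

# Map the host user's home directory onto the container's JENKINS_HOME.
docker run -d --name jenkins \
  -p 8080:8080 \
  -v /home/jenkins:/var/jenkins_home \
  jenkins/jenkins:lts
```

Matching the UID matters more than the user name: the daemon checks numeric IDs, so a mismatched UID leaves Jenkins unable to write to its own home directory.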
Rather than using explicit host:container mounts, consider using named volumes. This has several benefits:
They can be shared easily into other containers
They are host-agnostic (a bind mount will fail if the specific path doesn't exist on that machine; a named volume won't)
They can be managed as first-class citizens in the Docker world (docker volume)
You don't have to worry about where to put them on your host ;)
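A quick sketch of the named-volume approach (the volume and container names are illustrative):

```shell
# Create a named volume; Docker picks and manages its location on the host.
docker volume create jenkins_home

# Attach it by name instead of by host path.
docker run -d --name jenkins \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts

# If you ever do need the host location, ask Docker for it:
docker volume inspect --format '{{ .Mountpoint }}' jenkins_home
```

The volume outlives the container, so you can remove and recreate the Jenkins container freely without losing its home directory.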

How to expose host machine aliases to Docker container?

Docker has great documentation on linking containers - allowing one container to make use of the other container's environment variables.
However, how would one go about exposing command line aliases (and their respective programs) of the host machine to the Docker container?
Or, perhaps the better way to go about this is to simply configure the Docker container to build from an image that has these aliases / "dotfiles" built in?
I don't think you are approaching Docker the way you should. A Docker container's purpose is to run a network application and expose it to the outside world.
If you need aliases for your application running inside a container, then you have to build an image first that contains the whole environment your app needs...
Or define them in the Dockerfile while building your image.
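A minimal sketch of the Dockerfile approach (the base image and dotfile names are illustrative; this is a config fragment, not a complete image):

```dockerfile
FROM ubuntu:22.04

# Bake your dotfiles into the image instead of trying to expose host aliases.
COPY .bash_aliases /root/.bash_aliases

# Or define aliases directly; interactive bash shells pick them up from .bashrc.
RUN echo "alias ll='ls -alF'" >> /root/.bashrc
```

Host aliases are shell configuration of the host, not environment variables, so container linking can't carry them over; baking them into the image is the idiomatic route.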
