nested docker setup: child exposes parent - docker

I have the following Docker setup:
Ubuntu server
-> running Jenkins (started with docker-compose)
-> running a pipeline which starts a node-alpine image
-> which then calls a new docker-compose up (needed for tests)
If I call docker ps from the node-alpine container, I see all the containers from the Ubuntu server. I would have expected to only see the newly started containers.
Is this an indication that my setup is flawed? Or just the way docker works?

That's just the way Docker works. There's no such thing as a hierarchy of containers.
Typically with setups like this you give an orchestrator (like Jenkins) access to the host's Docker socket. That means containers launched from Jenkins are indistinguishable from containers launched directly from the host, and it means that Jenkins can do anything with Docker that you could have done from the host.
Remember that access to the Docker socket means reading and modifying arbitrary files as root on the host, along with starting, stopping, deleting, and replacing other containers. In this setup you might re-evaluate how much you really need that lowest level container to start further containers, since it is a significant security exposure.
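For reference, handing the host's Docker socket to a Jenkins container usually looks roughly like the compose snippet below; the image name and volume paths are illustrative, not taken from the question.

# docker-compose.yml for the Jenkins container (illustrative names and paths)
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
    volumes:
      - jenkins_home:/var/jenkins_home
      # bind-mounting the host socket is what makes docker ps inside the
      # pipeline show every container on the host, not just the ones it started
      - /var/run/docker.sock:/var/run/docker.sock
volumes:
  jenkins_home:

With that mount in place, any docker or docker-compose command run from inside the pipeline talks to the host's daemon directly.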

Related

Should we use supervisors to keep processes running in Docker containers?

I'm using Docker to run a Java REST service in a container. If I were outside of a container, I might use a process manager/supervisor to ensure that the Java service restarts if it encounters a strange one-off error. I see some posts about using supervisord inside of containers, but they seem to be focused mostly on running multiple services rather than just keeping one up.
What is the common way of managing services that run in containers? Should I just be using some built in Docker stuff on the container itself rather than trying to include a process manager?
You should not use a process supervisor inside your Docker container for a single-service container. Using a process supervisor effectively hides the health of your service, making it more difficult to detect when you have a problem.
You should rely on your container orchestration layer (which may be Docker itself, or a higher level tool like Docker Swarm or Kubernetes) to restart the container if the service fails.
With Docker (or Docker Swarm), this means setting a restart policy on the container.
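For illustration, a restart policy can be set either on docker run or in a compose file; the image and service names here are hypothetical.

# plain Docker: restart automatically unless the container is explicitly stopped
docker run -d --restart unless-stopped my-java-rest-service

# or the equivalent in docker-compose.yml
services:
  api:
    image: my-java-rest-service   # hypothetical image name
    restart: unless-stopped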

Best way to setup Jenkins with multiple Docker containers?

All,
I've searched high and low for this but was not able to find a reliable answer. The question may be simple for some pros, but please help me with this...
We have a situation where we need Jenkins to be able to access and build within Docker containers. The target Docker containers are built and instantiated with a separate docker-compose. What would be the best way of connecting Jenkins with the Docker containers in each of the scenarios below?
Scenario 1: Jenkins is set up on the host machine itself. Two Docker containers are instantiated using their own docker-compose file. How can Jenkins connect to the containers in this situation? The host cannot ping the Docker containers, since the two are on different networks (the host on the physical network and the containers on Docker's bridge network), so presumably no SSH either?
Scenario 2: We would prefer Jenkins to be in its own container (with its own docker-compose) so we can replicate the same setup in other environments. How can Jenkins connect to the containers in this situation? The Jenkins container cannot ping the other Docker containers even though I use the same network name in both docker-compose files; instead Docker creates an additional bridge network on its own. E.g., in this second scenario, if I have network-01 in Docker-Compose 1 and mention the same name in Docker-Compose 2, Docker creates an additional network for Compose 2. As a result, I cannot ping the Node/Mongo containers from the Jenkins container (so I guess no SSH either).
Note 1: I'm exposing port 22 on both Docker images, i.e. Node & Mongo...
Note 2: Our current setup has Jenkins on the host machine with Docker volumes exposed from the container to the host. Is this the preferred approach?
Am I missing the big elephant in the room, or is the solution complicated (it shouldn't be!)?
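To make the Scenario 2 behaviour concrete, here is a minimal sketch with hypothetical service names: by default Compose prefixes each network with the project name, so two compose files that both declare network-01 end up on two separate bridge networks. Declaring the network as external in the second file is one way to have both projects share it.

# docker-compose-01.yml -- Compose actually creates <project01>_network-01
services:
  node:
    image: node:alpine
    networks: [network-01]
networks:
  network-01:

# docker-compose-02.yml -- joins the existing network instead of creating a new one
services:
  jenkins:
    image: jenkins/jenkins:lts
    networks: [network-01]
networks:
  network-01:
    external: true
    name: project01_network-01   # the real prefixed name depends on the first project's name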

How to run a bash script in the host from a docker container and get the result

I have a Jenkins that is running inside of a docker container. Outside of the Docker container in the host, I have a bash script that I would like to run from a Jenkins pipeline inside of the container and get the result of the bash script.
You can't do that. One of the major benefits of containers (and also of virtualization systems) is that processes running in containers can't make arbitrary changes or run arbitrary commands on the host.
If managing the host in some form is a major goal of your task, then you need to run it directly on the host, not in an isolation system designed to prevent you from doing this.
(There are ways to cause side effects like this to happen: if you have an ssh daemon on the host, your containerized process could launch a remote command via ssh; or you could package the command in a service on the host that is triggered by a network request. But these are basically the same approaches you'd use to make your host system manageable by "something else", and triggering them from a local Docker container is no different from triggering them from another host.)
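If you do go the ssh route, a rough sketch of such a step is below; the key path, user, host address (a documentation IP here) and script path are all placeholders.

# run a script that lives on the host, from inside the container, and capture its output
ssh -i /var/jenkins_home/.ssh/id_rsa \
    -o StrictHostKeyChecking=no \
    deploy@192.0.2.10 '/opt/scripts/host-task.sh' > result.txt
echo "ssh exit code: $?"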

Rancher Performance (Docker in Docker?)

Looking at Rancher, what is the performance like? I guess my main question is: is everything deployed in Rancher Docker-in-Docker? After reading http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ I'm trying to stay away from that idea. It looks like the Rancher CI pipeline with Docker/Jenkins is Docker-in-Docker, but what about the rest? If I set up a docker-compose or deploy something from their catalog, is it all Docker-in-Docker? I've read through their documentation and this simple question has still just flown over my head. Any guidance would be much appreciated.
Thank you
Rancher itself is not deployed with Docker-in-Docker (DinD). The main components of Rancher, rancher/server and rancher/agent, are both normal containers. The server, in a normal deployment, runs the orchestration piece and a few other key services: the catalog, Docker Machine provisioning, websocket-proxy and MySQL. All of these can be broken out if desired, but for simplicity of getting started, it's all in one. We use s6 to manage the orchestration and database processes.
The rancher/agent container is privileged and requires the user to bind mount the host's Docker socket. We package a Docker binary in the container and use it to communicate with the host on startup. It is similar to the way a Mac talks to Boot2Docker: the binary is just a client talking to a remote Docker daemon. Once the agent is bootstrapped, it communicates back to the Rancher server container over a websocket connection. When containers and stacks are deployed, Rancher server sends events to the agents, which then call the host's Docker daemon for deployment. The deployed containers run as normal Docker containers on the host, just as if the user had typed docker run .... In fact, a neat feature of Rancher is that if you do type docker run ... on the host, the resulting container will show up in the Rancher UI.
The Jenkins entry in the Rancher catalog, when using the Swarm plugin, does a host bind mount of the Docker socket as well. We had some early experiments that used DinD to test out some concepts with Jenkins, but those were not released.
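For orientation, a container that manages the host's daemon through a bind-mounted socket is started along these lines; this is a generic illustration, not the actual rancher/agent command line.

# generic pattern: a privileged container given the host's Docker socket
docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  some/agent-image   # placeholder image name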

Deploying changes to Docker and its containers on the fly

Brand spanking new to Docker here. I have Docker running on a remote VM and am running a single dummy container on it (I can verify the container is running by issuing a docker ps command).
I'd like to avoid running Docker as root by giving my non-root user access to the docker group:
sudo usermod -aG docker myuser
But I'm afraid to muck around with Docker while any containers are running in case "hot deploys" create problems. So this has me wondering, in general: if I want to do any sort of operational work on Docker (daemon, I presume) while there are live containers running on it, what do I have to do? Do all containers need to be stopped/halted first? Or will Docker keep on ticking and apply the updates when appropriate?
Same goes for the containers themselves. Say I have a myapp-1.0.4 container deployed to a Docker daemon. Now I want to deploy myapp-1.0.5, how does this work? Do I stop 1.0.4, remove it from Docker, and then deploy/run 1.0.5? Or does Docker handle this for me under the hood?
if I want to do any sort of operational work on Docker (daemon, I presume) while there are live containers running on it, what do I have to do? Do all containers need to be stopped/halted first? Or will Docker keep on ticking and apply the updates when appropriate?
Usually, all containers are stopped first.
That typically happens when I upgrade Docker itself: I find all my containers stopped (except the data containers, which are only ever created, never started, and remain so).
Say I have a myapp-1.0.4 container deployed to a Docker daemon. Now I want to deploy myapp-1.0.5, how does this work? Do I stop 1.0.4, remove it from Docker, and then deploy/run 1.0.5? Or does Docker handle this for me under the hood?
That depends on the nature and requirements of your app: for a completely stateless app, you could even run 1.0.5 alongside 1.0.4 (with different host ports mapped to your app's exposed port), test it a bit, and stop 1.0.4 when you think 1.0.5 is ready.
But for an app with any kind of shared state or resource (mounted volumes, a shared data container, ...), you would need to stop and rm 1.0.4 before starting the new container from the 1.0.5 image.
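A minimal sketch of the stateless case, with hypothetical image names and ports:

# 1.0.4 is already running on host port 8080
docker run -d --name myapp-1.0.4 -p 8080:3000 myrepo/myapp:1.0.4

# bring up 1.0.5 on a different host port and test it
docker run -d --name myapp-1.0.5 -p 8081:3000 myrepo/myapp:1.0.5

# once 1.0.5 looks good, retire the old version
docker stop myapp-1.0.4 && docker rm myapp-1.0.4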
(1) why don't you stop them [the data containers] when upgrading Docker?
Because... they were never started in the first place.
In the lifecycle of a container, you can create, then start, then run a container. But a data container, by definition, has no process to run: it just exposes VOLUME(s) for other containers to mount (--volumes-from).
(2) What's the difference between a data/volume container and a Docker container running, say, a full-bore MySQL server?
The difference is, again, that a data container doesn't run any process, so it doesn't exit when said process stops. That never happens, since there is no process to run.
The MySQL server container would be running as long as the server process doesn't stop.
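For reference, the data-container pattern described above looks roughly like this; the names and password are placeholders.

# create (but never start) a data container that only exposes a volume
docker create --name mysql-data -v /var/lib/mysql busybox

# mount its volume into the container that actually runs a process
docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=example \
  --volumes-from mysql-data mysql:5.7

# docker ps shows only the mysql container; docker ps -a also lists mysql-data as Created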
