How do I run Docker in Docker on Heroku?

Why?
I'm trying to create a general-purpose solution for running docker-compose on Heroku. I want to make a one-click deployment solution through the use of Heroku Button deployment. This way, a user does not need any knowledge of git, the Heroku CLI, or Docker.
The problem.
Docker and the Docker daemon are only available when I set the stack to container. There are buildpacks that give you the docker and docker-compose CLIs, but without the Docker daemon you cannot run a Docker image. So buildpacks won't work.
With the stack set to container I can use the file heroku.yml (article). In there I define my processes. (It replaces the Procfile; if I still add a Procfile to my project, it does nothing.)
I can also define a Dockerfile there to build my Docker image.
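For context, a minimal heroku.yml for this kind of setup looks roughly like the following; the process name and the run command here are only illustrative:
build:
  docker:
    worker: Dockerfile
run:
  worker: bash ./start.sh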
However, when I run the Docker image, the following error pops up:
2019-02-28T15:32:48.462101+00:00 app[worker.1]: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
2019-02-28T15:32:48.462119+00:00 app[worker.1]:
2019-02-28T15:32:48.462122+00:00 app[worker.1]: If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
The problem is that, inside the Docker container, the Docker daemon is not running. The usual solution to this is to mount the host's Docker socket:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
And since heroku.yml replaces the Procfile (see above), I cannot run that command. And if I were using a buildpack I could use a Procfile, but then the Docker daemon wouldn't be running...
I tried defining a VOLUME within the Dockerfile and the problem persists. Furthermore, a Heroku article says: "Volume mounting is not supported. The filesystem of the dyno is ephemeral."
On Heroku it is possible to run a Docker image. What I am struggling with is running a Docker-in-Docker image.
Running a Docker-in-Docker image works fine on my VPS by mounting /var/run/docker.sock, but this cannot(?) be done on Heroku.
Last words:
I'm trying to make this work so that other people can easily deploy software solutions even if they are not comfortable with git, the Heroku CLI, and Docker.

Unfortunately the answer to your question is: not yet.
For security reasons, Heroku does not give users the ability to run privileged containers, because the container could then access host capabilities.
The documentation is pretty clear about the limitations, e.g.: no --privileged containers and no root user either, no VOLUMEs, and the disk is ephemeral.
After playing with DinD images for your use case, I came to the conclusion that trying to run Docker inside a Heroku container is not the right choice or design.
I am pretty sure that what you are trying to achieve is close to what Heroku already offers its users. Offering a platform or application where a non-developer can push and deploy applications with just a button can be very interesting in various ways, and it can be done with an application that uses their Platform API.
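For illustration only, here is a rough sketch of how such an application could drive the Platform API. It uses the app-setups endpoint (the same mechanism behind the Heroku Button); the API key variable and the repository URL are placeholders:
curl -X POST https://api.heroku.com/app-setups \
  -H "Accept: application/vnd.heroku+json; version=3" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $HEROKU_API_KEY" \
  -d '{"source_blob": {"url": "https://github.com/<user>/<repo>/tarball/master/"}}'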
In this situation, a web application (running on Heroku) may not (to my knowledge) be able to do what you want. Instead, you would need to embed git, Docker, and your logic for parsing, verifying, building and pushing your applications/components to Heroku's container registry in a desktop application.
In the end, if you still think you need a DinD solution, then your original approach of using a VPS is the only solution for the moment. But be aware that it may open security vulnerabilities in your system, and by the time you have closed those doors you may find yourself offering something similar to what Heroku already offers.

I don't think you can run a service on Heroku that can use the docker command to start Docker containers.
I want to make a one-click deployment solution through the use of Heroku Button deployment.
I think you can point the Deploy button at one of your automation servers (e.g. a Jenkins instance already deployed on Heroku or another cloud) to trigger the deploy pipeline, so people don't have to interact with git, Docker, etc.
But yes, if you are not using popular solutions like Jenkins or CircleCI to log in and then deploy, you will have to deal with a lot of problems such as security and parameter handling...

What I did was install the Docker client binary in my Dockerfile, like this:
RUN curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.04.0-ce.tgz \
&& tar xzvf docker-17.04.0-ce.tgz \
&& mv docker/docker /usr/local/bin \
&& rm -r docker docker-17.04.0-ce.tgz
Then, in the args section for running the Docker container, I added this:
args '--user root -v /var/run/docker.sock:/var/run/docker.sock'
For further explanation of why this works, see: stackoverflow.com/q/27879713/354577. This works well for me, though.
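As a quick sanity check (the image name here is just a placeholder), you can verify that the client inside the container talks to the host's daemon through the mounted socket:
docker run --rm --user root \
  -v /var/run/docker.sock:/var/run/docker.sock \
  <your-image-with-docker-cli> docker version
If both the Client and Server sections are printed, the container is using the host's Docker daemon.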

Related

Access files on host server from the Meteor App deployed with Meteor Up

I have a Meteor App deployed with Meteor Up to Ubuntu.
From this App I need to read a file which is located outside the App container, on the host server.
How can I do that?
I've tried to set up volumes in mup.js but had no luck. It seems that I'm missing how to correctly provide /host/path and /container/path:
volumes: {
// passed as '-v /host/path:/container/path' to the docker run command
'/host/path': '/container/path',
'/second/host/path': '/second/container/path'
},
I've read the docs for mounting Docker volumes but obviously can't understand them.
Let's say the file is in /home/dirname/filename.csv.
How do I correctly mount it into the App so that I can access it from the application?
Or maybe there are other possibilities to access it?
Welcome to Stack Overflow. Let me suggest another way of thinking about this...
In a scalable cluster, Docker instances can be spun up and down as the load on the app changes. These may or may not be on the same host computer, so building a dependency on the host's file system isn't a great idea.
You might be better off using a file storage mechanism such as S3, which will scale on its own, and disk storage limits won't apply.
Another option is to determine if the files could be stored in the database.
I hope that helps
Let's try to narrow the problem down.
Meteor Up passes the volumes configuration parameter directly on to Docker, as the comment you included also mentions. It therefore might be easier to test it against Docker directly, narrowing the components involved down as much as possible:
sudo docker run \
-it \
--rm \
-v "/host/path:/container/path" \
-v "/second/host/path:/second/container/path" \
busybox \
/bin/sh
Let me explain this:
sudo because Meteor UP uses sudo to start the container. See: https://github.com/zodern/meteor-up/blob/3c7120a75c12ea12fdd5688e33574c12e158fd07/src/plugins/meteor/assets/templates/start.sh#L63
docker run we want to start a container.
-it to access the container (think of it like SSH'ing into the container).
--rm to automatically clean up - remove the container - after we're done.
-v - here we give the volumes as you define it (I here took the two directories example you provided).
busybox - an image with some useful tools.
/bin/sh - the application to start the container with
I'd expect that you also cannot access the files here. In that case, dig deeper into why you can't make the folder accessible in Docker.
If you can access them (which would seem odd to me), start your app container and get into the running container with the following command:
docker exec -it my-mup-container /bin/sh
You can think of this command like SSH'ing into a running container. Now you can check whether the file really isn't there, whether the credentials inside the container are correct, etc.
Lastly, I have to agree with @mikkel that mounting a local directory is not a good option, but you can now start looking into how to use a Docker volume to mount remote storage. He mentioned S3 by AWS; I've worked with Azure Files on Azure; there are plenty of possibilities.
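To make this concrete for the file from the question (the paths simply mirror that example), the direct Docker test would be:
sudo docker run -it --rm \
  -v "/home/dirname:/data" \
  busybox \
  cat /data/filename.csv
If that prints the file, the corresponding mup.js entry would map '/home/dirname': '/data', and the app would read /data/filename.csv.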

Installing and running Docker in a Docker container running in OpenShift

I am currently working on the following scenario:
I am trying to set up a container in OpenShift that runs Jenkins and is itself able to run Docker, to make use of declarative pipelines where the build runs in its own Docker container. This basically makes it necessary to install and run Docker inside this container.
I have been working on it for quite some time now and have checked dozens of posts and threads online, but I have not been able to accomplish it. Basically I got this far:
I can install Docker in my container (from the base image openshift/jenkins-2-centos7:latest).
I can't get Docker to run, as starting it makes use of systemctl.
Now, I read that systemctl does not work inside Docker containers, or at least is highly unrecommended, as it interferes with PID 1 in the system. Without
systemctl start docker
Docker is unable to connect to the daemon (as expected), and I get the error message
Can't connect to docker daemon. Is 'docker -d' running on this host?
So I tried to set up the daemon myself, using the following in my Dockerfile:
RUN usermod -aG docker $(whoami)
RUN dockerd -H unix:///var/run/docker.sock
which also does not work, telling me that cgroups cannot be mounted. After some more research I found that this could be handled with the cgroupfs-mount script from
https://github.com/tianon/cgroupfs-mount/tree/master
But here, too, I had no luck, which left me with the following error:
Error starting daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.4.21: can't initialize iptables table `nat': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.
Now, after hours, I am out of ideas. Does anyone have an idea how to make Docker work inside OpenShift? I would be really grateful.
I am trying to set up a container in OpenShift that runs Jenkins and is itself able to run Docker, to make use of declarative pipelines where the build runs in its own Docker container. This basically makes it necessary to install and run Docker inside this container.
I don't think your conclusion here is the only possibility, and what I'll describe below is an easier approach to get what (I think) you want! :) If you have any use cases other than the three I'll describe, let me know and I'll try to update to cover them:
Pipelines running in their own containers
Running additional containers from Pipelines
Building container images from Pipelines
Pipelines running in their own containers
For this case, there's the excellent Kubernetes plugin.
With this plugin, you add a Kubernetes/OpenShift cloud to the Jenkins global config. This can either be the one in which Jenkins is running (if you use the Jenkins image provided by OpenShift, this gets added by default at least), or an external cluster.
Inside that configuration, you can define PodTemplates (again, there are a couple of examples provided in the Jenkins image provided by OpenShift), or, I think, you can also specify them directly in your pipeline. When your pipeline requests a node/agent with a label that matches one of these (and there are no long-running agents that match), a pod will be created from that template, and your pipeline execution will happen inside a container in it. Once it's no longer needed, it will be deprovisioned again.
Here are the pipeline steps exposed by this plugin: https://jenkins.io/doc/pipeline/steps/kubernetes/
Running additional containers from Pipelines
As part of your pipeline, you may want to run some tests, and those may expect to be able to interact with e.g. a database. You can create resources for that in your OpenShift project (e.g. a Deployment & expose it with a Service), and tear them down after. The openshift-client plugin is very useful here and has docs on how to interact with OpenShift.
Building container images from Pipelines
If your goal is to build container images from pipelines, remember that OpenShift also exposes this capability (depending on the security configuration) through Builds. Just like in the previous section, you can use the openshift-client plugin to create and trigger builds.
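For orientation only (this is a command-line sketch, not the plugin's syntax), the same kind of build can be exercised with the oc client; the openshift-client plugin lets pipelines drive the equivalent API calls. The names below are placeholders:
# create a binary Docker build and feed it the local sources
oc new-build --name=myapp --strategy=docker --binary
oc start-build myapp --from-dir=. --follow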
For more information on the Jenkins image that's maintained by OpenShift (and generally how to do useful things in Jenkins on OpenShift), there's this dedicated page in the OpenShift docs.
There is this article by @jpetazzo, from the Docker team, about Docker-in-Docker (DinD).
From the article:
The primary purpose of Docker-in-Docker was to help with the development of Docker itself. Many people use it to run CI (e.g. with Jenkins), which seems fine at first, but they run into many “interesting” problems that can be avoided by bind-mounting the Docker socket into your Jenkins container instead.
From the DinD repo:
This work is now obsolete, thanks to the combined efforts of some amazing people like @jfrazelle and @tianon, who also are black belts in the art of putting IKEA furniture together.
If you want to run Docker-in-Docker today, all you need to do is:
docker run --privileged -d docker:dind
And here is an article using another approach: building Docker containers with Jenkins running inside a Docker container that mounts the Docker socket:
docker run -p 8080:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
--name jenkins \
jenkins/jenkins:lts
So you may want to adapt one of these solutions to your OpenShift scenario. I hope it solves your issue.
You'd need a privileged pod running Jenkins which mounts the OpenShift node's Docker socket. This is a bad idea, as Jenkins would launch containers outside Kubernetes' semantics and control.
Why not use the S2I (source-to-image) service shipped with OpenShift?
Regards.

New to Docker - how to essentially make a cloneable setup?

My goal is to use Docker to create a mail setup running postfix + dovecot, fully configured and ready to go (on Ubuntu 14.04), so I can easily deploy it on several servers. As far as I understand Docker, the process to do this is:
Spin up a new container (docker run -it ubuntu bash).
Install and configure postfix and dovecot.
If I need to shut down and take a break, I can exit the shell and return to the container via docker start <id> followed by docker attach <id>.
(here's where things get fuzzy for me)
At this point, is it better to export the image to a file, import it on another server, and run it? How do I make sure the container will automatically start postfix, dovecot, and other services when it runs? I also don't quite understand the difference between using a Dockerfile to automate installations and just installing everything manually and exporting the image.
Configure multiple Docker images using Dockerfiles.
Each Docker container should run only one service, so use one container for postfix, one for dovecot, and so on. Your running containers can communicate with each other.
Build those images.
Push those images to a registry so that you can easily pull them on different servers and have the same setup.
Pull those images on your different servers.
You can pass ENV variables when you start a container to configure it (see the sketch below).
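A rough sketch of the build/push/pull steps and an ENV variable at run time; the registry address, image name, and variable name are placeholders:
docker build -t registry.example.com/mail/postfix:1.0 .   # build from your Dockerfile
docker push registry.example.com/mail/postfix:1.0         # push so other servers can pull it
docker pull registry.example.com/mail/postfix:1.0         # run this on each target server
docker run -d -e MAIL_DOMAIN=example.com registry.example.com/mail/postfix:1.0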
You should not install something directly inside a running container.
That defeats the purpose of having a reproducible setup with Docker.
Your step 2 should be a RUN entry inside a Dockerfile, which is then used with docker build to create an image.
This image could then be used to start and stop running containers as needed.
See the Dockerfile RUN entry documentation. This is usually used with apt-get install to install needed components.
The ENTRYPOINT in the Dockerfile should be set to start your services.
In general it is recommended to have just one process in each image.
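To make that concrete, here is a minimal illustrative sketch only (not a complete mail image): dovecot is used as the example service, and dovecot.conf is a hypothetical config file next to the Dockerfile.
FROM ubuntu:14.04
# Step 2 of the question becomes RUN instructions instead of manual installs
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y dovecot-imapd \
    && rm -rf /var/lib/apt/lists/*
# Bake the configuration into the image so every server gets an identical setup
COPY dovecot.conf /etc/dovecot/dovecot.conf
EXPOSE 143
# Run the single service in the foreground so it stays the container's main process
ENTRYPOINT ["dovecot", "-F"]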

Is it possible to wrap an entire Ubuntu 14 OS in a Docker image?

I have a Ubuntu 14 desktop, on which I do some of my development work.
This work mainly revolves around Django & Flask development using PyCharm
I was wondering if it is possible to wrap the entire OS file system in a Docker container, so that my whole development environment, including PyCharm and any other tools, would become portable.
Yes, this is where Docker shines. Once you install Docker you can run:
docker run --name my-dev -it ubuntu:14.04 /bin/bash
and this will put you, as root, inside a Docker container's bash prompt. It is, for all intents and purposes, the entire OS without anything extra; you will need to install the extras, like PyCharm, Flask, Django, etc. yourself. The environment you start with has nothing, so you will have to add things like pip (apt-get install -y python-pip) and other goodies. Once you have your entire environment, you can exit (with exit, or ^D) and you will be back in your host operating system. Then you can commit:
docker commit -m 'this is my development image' my-dev my-dev
This takes the Docker image you just ran (and updated with changes) and saves it on your machine as the image my-dev; any time in the future you can run it again using the invocation:
docker run -it my-dev /bin/bash
Building a Docker image like this is harder; it is easier once you learn how to write a Dockerfile, a file that describes the base image (ubuntu:14.04) and all of the modifications you want to make to it. I have an example of a Dockerfile here:
https://github.com/tacodata/pythondev
This builds my Python development environment, including git, SSH keys, compilers, etc. It does have my name hardcoded in it, so it won't help you much for development as-is (I need to fix that). Anyway, you can download the Dockerfile, change the details in it, and create your own image like this:
docker build -t my-dev - < Dockerfile
There are hundreds of examples on Docker Hub, which is where I started with mine.

Docker rails app and git

Let's say I have a container that is fully equipped to serve a Rails app with Passenger and Apache, and I have a vhost that routes to /var/www/app/public in my container. Since a container is supposed to be sort of like a process, what do I do when my Rails code changes? If the app was cloned with Git and there are pending changes in the repo, how can the container pull in these changes automatically?
You have a choice on how you want to structure your container, depending on your deployment philosophy:
Minimal: You install all your Rails prerequisites in the Dockerfile (RUN commands), but have the ENTRYPOINT be something like "git pull && bundle install --deployment && rails server". At container boot time it will get your latest code.
Snapshot: Same as above, but also run those commands as a RUN step in the Dockerfile. This way the container ships with a pre-installed snapshot of the code, but it will still update when the container is booted. Sometimes this can speed up boot time (i.e. if most of the gems are already installed).
Container as Deployment: Same as above, but change the ENTRYPOINT to "rails server" only. This way, your container is your code. You'll have to build a new container every time you change your Rails code (automation!). The advantage is that your container won't need to contact your code repo at all. The downside is that you always have to remember which container is the latest (tags can help). And right now, Docker doesn't have a good story on cleaning up old containers.
In this scenario, it sounds like you have built an image and are now running that image in a container.
Using the image your running container originates from, you could add another build step that does a git pull of your most up-to-date code. I'd consider this an incremental update, as you are building upon a pre-existing image. I'd recommend tagging the result and pushing it to your index (assuming you are using a private index) as appropriate. The new image would then be available to run.
Depending on your needs, you could also rebuild the base image of your software. I'm assuming you are using a Dockerfile to build your original image, which includes a git checkout of your software. You could then tag the result and push it to your index for use as appropriate.
In Docker v0.8 it will be possible to start a new command in a running container, so you will be able to do what you want.
In the meantime, one solution would consist in using volumes.
Option 1: Docker managed volumes
FROM ubuntu
...
VOLUME ["/var/www/app/public"]
ADD host/src/path /var/www/app/public
CMD start rails
Start and run your container, then when you need to git pull, you can simply:
$ docker ps # -> retrieve the id of the running container
$ docker run --volumes-from <container id> <your image with git installed> sh -c 'cd /var/www/app/public && git pull'
This will result in your first running container having its sources updated.
Option 2: Host volumes
You can start your container with:
$ docker run -v `pwd`/srcs:/var/www/app/public <yourimage>
and then simply git pull in your host's sources directory; it will update the container's sources.
