Deploy Rails app using Capistrano from inside a Docker container - ruby-on-rails

I managed to dockerize my Rails application for development and it works great. Before this I had a deploy setup using Capistrano. Now I would like to try deploying with the same Capistrano setup, but executed from within the Docker container. My question is: can I use the same SSH key from my host machine, or should I generate a new key inside the container? The latter option does not sound good to me, since it would have to be recreated whenever the container gets destroyed. I am aware that in the long run I would probably be better off setting up the production server to run Docker and installing through Docker Machine, but for now I would like to keep the setup I already have in production.
Has anyone else tried this?

I think you should mount your device's SSH key into the container (as long as the container is not accessible from the network). In addition to your own argument, you can more easily share your image with others, as they can simply mount their own key themselves.

You can mount your SSH key into the container at runtime.
docker run -v /path/to/host/ssh-key:/path/to/container/ssh-key <image> <command>
The key will then be available inside the container at /path/to/container/ssh-key.
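For the Capistrano case from the question, a hedged sketch of what the full command could look like (the image name, key filename and Capistrano stage are placeholders, and it assumes the container's user is root so that the keys land in /root/.ssh):
# mount the host's key and known_hosts read-only, then deploy from inside the container
docker run --rm \
  -v ~/.ssh/id_rsa:/root/.ssh/id_rsa:ro \
  -v ~/.ssh/known_hosts:/root/.ssh/known_hosts:ro \
  -v "$(pwd)":/app -w /app \
  my-rails-dev-image bundle exec cap production deploy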

Related

How do I run Docker in Docker on Heroku?

Why?
I'm trying to create a general-purpose solution for running docker-compose on Heroku. I want to make a one-click deployment solution through the Heroku Button. This way, a user does not need any knowledge of git, the Heroku CLI or Docker.
The problem.
Docker and the Docker daemon are only available when I set the stack to container. There are buildpacks that give you the docker and docker-compose CLIs, but without the Docker daemon you cannot run a Docker image. So buildpacks won't work.
With the stack set to container I can use the file heroku.yml (article). In there I define my processes. (It replaces the Procfile; if I still add a Procfile to my project, it does nothing.)
I can also define a Dockerfile there to build my docker image.
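For reference, a minimal sketch of what that heroku.yml can look like (the process name worker matches the log lines below; the run command is a placeholder for whatever you are starting):
build:
  docker:
    worker: Dockerfile
run:
  worker: docker-compose up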
However, when I run the Docker image, the following error pops up:
2019-02-28T15:32:48.462101+00:00 app[worker.1]: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
2019-02-28T15:32:48.462119+00:00 app[worker.1]:
2019-02-28T15:32:48.462122+00:00 app[worker.1]: If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
The problem is that, inside the Docker container, the Docker daemon is not running. The usual solution to this is to mount the host's Docker socket:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
And since heroku.yml replaces the Procfile (see above), I cannot run that command. If I were using a buildpack I could use a Procfile, but then the Docker daemon wouldn't be running...
I tried defining a VOLUME within the Dockerfile and the problem persists. Furthermore, a Heroku article says: "Volume mounting is not supported. The filesystem of the dyno is ephemeral."
On Heroku it is possible to run a Docker image. What I am struggling with is running a Docker-in-Docker image.
Running a Docker-in-Docker image works fine on my VPS by mounting /var/run/docker.sock, but this cannot(?) be done on Heroku.
Last words:
I'm trying to make this work so that other people can easily deploy a software solution even though they are not comfortable with git, the Heroku CLI and Docker.
Unfortunately the answer to your question is: not yet.
For security reasons Heroku does not give users the ability to run privileged containers, because the container could access host capabilities.
The documentation is pretty clear about the limitations, e.g. no --privileged containers and no root user either, no VOLUMEs, and the disk is ephemeral.
After playing with DinD images for your use case, I came to the conclusion that trying to run Docker inside a Heroku container is not the right choice or design.
I am pretty sure that what you are trying to achieve is close to what Heroku itself offers its users. Offering a platform or an application where non-developers can push and deploy applications with just a button can be very interesting in various ways, and it can be done with an application using their Platform API.
In this situation a web application (running on Heroku) may not, to my knowledge, be able to do what you want. Instead you would need a desktop application that embeds git, Docker, and your own logic for parsing, verifying, building and pushing your applications/components to Heroku's container registry.
In the end, if you still think you need a DinD solution, then your original approach of using a VPS is the only option for the moment. But be aware that it may open security vulnerabilities in your system, and that while trying to close those security holes you may end up building something very similar to what Heroku already offers.
I don't think you can run a service on Heroku that can use the docker command to start some docker container.
I want to make a one-click deployment solution through the Heroku Button.
I think you can point the Deploy button at one of your automation servers (e.g. a Jenkins instance already deployed on Heroku or another cloud) to trigger the deploy pipeline, so that people don't have to interact with git, Docker, etc.
But yes, you then have to deal with a lot of problems, like security and parameter handling, when not using popular solutions like Jenkins/CircleCI to log in and then deploy...
What I did was install the docker client binary in my Dockerfile like this:
RUN curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.04.0-ce.tgz \
&& tar xzvf docker-17.04.0-ce.tgz \
&& mv docker/docker /usr/local/bin \
&& rm -r docker docker-17.04.0-ce.tgz
Then, in the args section for running the container, I added this:
args '--user root -v /var/run/docker.sock:/var/run/docker.sock'
For further explanation of why this works, see stackoverflow.com/q/27879713/354577. This approach works well for me.
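In other words, the image only ships the docker client binary and talks to the host's daemon through the mounted socket. A quick way to sanity-check that from the host (the image name is a placeholder for an image built with the Dockerfile snippet above):
docker run --rm --user root \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-image-with-docker-cli docker info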

how to configure docker containers proxy?

First of all,
I tried the approach of setting '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy), and it really does work for the Docker daemon, but it does not work for containers; it seems this only takes effect for commands like 'docker pull'.
Secondly,
I have a lot of Docker containers, and I don't want to pass 'docker run -e http_proxy=xxx ...' every time I start a container.
So I wondered whether there is a way to automatically load a global configuration when a container starts. I googled it and found the suggestion to set the file '~/.docker/config.json' (How to configure docker container proxy?), but this way still does not work for me.
(My host machine runs CentOS 7; here is my docker -v: Docker version 1.13.1, build 6e3bb8e/1.13.1)
I feel that it may be related to my Docker version, or to Docker being started by the systemd service, and that is why ~/.docker/config.json does not take effect.
Finally,
I just hope that modifying a configuration file will let all my containers automatically get the proxy environment variables when they start (that is, automatically set 'http_proxy=http://HostIP:8118 https_proxy=http://HostIP:8118' when a container starts, like the Dockerfile ENV instruction does). I want to know if there is such a way. If it can be done, I can make the containers use the host's proxy; after all, the proxy on my host is working properly.
But I was wrong. I tried running a container and setting http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118 inside it, but when I ran 'wget facebook.com' I got 'Connecting to HostIP:8118... failed: No route to host.' The host machine (CentOS 7) can run the wget successfully, and I can ping the host from inside the container. I don't know why; it might be related to the firewall and port 8118.
It is over,
I have run out of ideas. Can anyone help me?
==============================
PS:
You can see from the screenshots below that what I actually want is to install goa and goagen, but I get an error, maybe for network reasons, so I wanted to enable the proxy and try again; hence the problem above.
1. My Go docker container: [screenshot: go docker wget]
2. My host: [screenshot: my host wget]
You need version 17.07 or more recent to automatically pass the proxy to containers you start using the config.json file. The 1.13 releases are long out of support.
This is well documented by Docker:
https://docs.docker.com/network/proxy/
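From that page, the per-container settings go into ~/.docker/config.json on the host where you run the docker CLI, roughly like this (using the HostIP:8118 proxy from the question):
{
  "proxies": {
    "default": {
      "httpProxy": "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
Containers started after that get http_proxy/https_proxy/no_proxy set automatically, but only on Docker 17.07 or newer, so the 1.13.1 build from the question needs an upgrade first.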

How to "docker push" to dynamic insecure registries?

OS: Amazon Linux (hosted on AWS)
Docker version: 17.x
Tools: Ansible, Docker
Our developers use Ansible to be able to spin up individual AWS spot environments that get populated with docker images that get built on their local machines, pushed into a docker registry created on the AWS spot machine, then pulled down and run.
When the devs do this locally on their MacBooks, Ansible will orchestrate building the code with sbt, spin up an AWS spot instance, run a Docker registry, push the image into the registry, tell the instance to pull down the image and run it, run a test suite, etc.
To make things better and easier for non-devs to run individual test environments, we put the Ansible script behind Jenkins and use their username to let Ansible create a domain name in Route53 that points to their temporary spot-instance environment.
This all works great without the registry -- i.e. using JFrog Artifactory so these dynamic environments just pull pre-built images. It lets QA team members spin up any version of the environment they want. But now, to allow it to build code and push, I need an insecure registry, and that is where things fell apart...
Since any user can run this, the Route53 domain name is dynamic. That means I cannot just hardcode an --insecure-registry entry in daemon.json. I have tried to find a way to set a wildcard registry, but it didn't seem to work for me. Also, since this is a shared build server (the one running the Ansible commands), I don't want to keep adding entries and restarting Docker, because other things might be running.
So, to summarize the questions:
Is there a way to use a wildcard for the insecure-registry entry?
How can I get docker to recognize insecure-registry entry without restarting docker daemon?
So far I've found a solution that satisfies my needs, but I'm not 100% happy with it yet; I'll keep working on it. It doesn't handle the first question about a wildcard, but it does seem to answer the second question about reloading without a restart.
The first problem was that I was editing the wrong file. Docker doesn't respect /etc/sysconfig/docker, nor does it respect $HOME/.docker/daemon.json. The only file that works for me on Amazon Linux is /etc/docker/daemon.json, so I edited it manually, then tested a reload and verified with docker info. I'll work on this more so I can programmatically insert entries as needed, but the manual test works:
sudo vim /etc/docker/daemon.json
sudo systemctl reload docker.service
docker info
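For reference, the entry itself is just the insecure-registries key in /etc/docker/daemon.json; the hostname and port below are placeholders for whatever Route53 name and registry port Ansible generates:
{
  "insecure-registries": ["someuser-testenv.example.com:5000"]
}
After the reload, docker info should list it under the Insecure Registries section.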

Docker backup container with startup parameters

I have been facing this problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes. Some of them were started using the CLI and docker start with some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works fine again. But if I do a commit, back it up with tar, load it on a different system and do a docker start, it has lost all of its settings.
The procedure as described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which would generate docker-compose files from my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with Borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker-compose file from a running container, so that I can redeploy a container 1:1 to another server, just run docker run mycontainer, and have it keep the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
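The full inspect output is large, so if it helps, a few --format queries (a hedged sketch; $container_id is the same placeholder as above) pull out the pieces you typically need to rebuild the run command:
docker container inspect --format '{{json .Config.Env}}' $container_id
docker container inspect --format '{{json .Mounts}}' $container_id
docker container inspect --format '{{json .HostConfig.PortBindings}}' $container_id
docker container inspect --format '{{json .Config.Cmd}}' $container_id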
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy, and it allows the configuration of the container to be documented as a configuration file tracked in version control, rather than error-prone, hand-entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
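As a sketch of what that looks like (the service name, image, ports and volume paths below are placeholders, not taken from your setup):
version: "3"
services:
  myapp:
    image: myimage:1.0
    ports:
      - "8080:80"
    volumes:
      - ./data:/var/lib/myapp
    environment:
      - SOME_SETTING=value
    restart: unless-stopped
Then docker-compose up -d on any machine that has (or can pull) the image recreates the container with exactly those settings.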
You would like to back up the container instance, but the commands you're using back up the image. I'd suggest updating your Dockerfile to solve the issue. In case you really want to go down the path of saving the instance's current state, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the content of volumes anyway; I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
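As a hedged sketch of that route (container, image and volume names are placeholders), including a separate tar of a named volume since export skips volume contents:
# export the container's filesystem (volumes are not included)
docker export $container_id | gzip > mycontainer.tar.gz
# import it back as an image; note that metadata such as CMD/ENTRYPOINT is not preserved
gunzip -c mycontainer.tar.gz | docker import - mycontainer:backup
# back up a named volume separately
docker run --rm -v myvolume:/data -v "$(pwd)":/backup alpine tar czf /backup/myvolume.tar.gz -C /data .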

Docker - running commands from all containers

I'm using docker-compose to create a basic environment for my websites (at the moment only locally, so I don't care about security issues). At the moment I'm using 3 different containers:
for nginx
for php
for mysql
I can obviously log in to any container to run commands. For example, I can ssh into the php container to verify the PHP version or run a PHP script, but the question is: is it possible to have a configuration where I could run commands for all the containers from, for example, one SSH container?
For example I would like to run commands like this:
php -v
nginx restart
mysql
after logging to one common SSH for all services.
Is it possible at all? I know there is the exec command, so I could prefix each command with the container name, but that isn't flexible enough to use, and with more containers it would become more and more difficult.
So the question is - is it possible at all and if yes, how could it be achieved?
Your question was:
Is it possible at all?
and the answer is:
No
This is due to the two restrictions you are giving in combination. Your first restrictions is:
Use SSH not Exec
It is definitely possible to have an SSH daemon running in each container and to set up the security so that you can run ssh commands in, for example, a passwordless mode;
see e.g. Passwordless SSH login
Your second restriction is:
one common SSH for all services
and this would now be the tricky part. You'd have to:
create one common SSH server, e.g. in one special container for this purpose, or in one of the existing containers
create communication to or between containers
make sure that the ssh server knows which command is for which container
All in all, this would be so complicated in comparison to a simple bash or python script that can do the same with exec commands that IMHO "no" is a better answer than trying to solve the academic problem of whether there might be some tricky/fancy way of doing this.
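To make the "simple script" point concrete, here is a minimal sketch of such a wrapper around docker-compose exec (it assumes the services are named nginx, php and mysql as in the question; the script name is arbitrary):
#!/usr/bin/env bash
# run-in.sh <service> <command...> : run a command inside one compose service
# examples:
#   ./run-in.sh php php -v
#   ./run-in.sh nginx nginx -s reload
#   ./run-in.sh mysql mysql -uroot -p
set -e
service="$1"
shift
exec docker-compose exec "$service" "$@"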
