Equivalent of docker-compose.yml parameters in Docker Cloud - docker

I started playing with Docker Cloud and am trying to deploy a tomav/docker-mailserver container to an EC2 instance. The EC2 instance and dockercloud-agent seem to work fine for container deployment.
The docker-compose.yml uses hostname and domainname parameters which are required to properly configure it, but I can't find their equivalent in Docker Cloud's interface.
One of them defaults to the container's auto-generated name, which I need to override.
Does anybody know if I am missing something? Or is it not possible yet?
Thank you for your help!

What you want is a stack file, which is Docker Cloud's rough equivalent of a docker-compose.yml.
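For reference, in compose syntax the settings you mention look like the snippet below; whether Docker Cloud's stack file accepts the same hostname and domainname keys is an assumption on my part, and example.com plus the port list are placeholders:
mailserver:
  image: tomav/docker-mailserver:latest
  hostname: mail              # host part of the mail server's FQDN
  domainname: example.com     # placeholder mail domain
  ports:
    - "25:25"
    - "587:587"
    - "993:993"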

Related

Access docker-compose api from outside host

I want to deploy an application using docker-compose inside an EC2 host.
For reasons beyond the scope of this question, one of the services will use a constant docker tag, as in myrepo/myimage:stable.
Periodically, the image will be updated (same tag, different hash) so I will need to run docker-compose pull && docker-compose up -d.
My question is whether there is a way of exposing docker-compose's API so that this can be invoked with an API call to the EC2 instance, so as to avoid having to ssh into the machine.
Compose doesn't have an API per se; it is just a local command-line tool. You need to use something like ssh, or a generic system-automation tool like Ansible or SaltStack, to invoke it.
Amazon's hosted container-cluster systems do have network-accessible APIs. If you use EKS, you can use the Kubernetes API to update a Deployment spec's image:. Amazon's proprietary ECS system has a different API, but again you can use it to remotely update the image name without having direct access to the underlying node(s).
In all cases you will be better off if you can use a unique tag per build. In a Compose setup you could supply this via an environment variable
image: myrepo/myimage:${TAG:-stable}
and then deploy it with
ssh root@remote-host TAG=20210414 docker-compose up -d
Since each build would have a distinct tag/name, you don't need to explicitly docker-compose pull; Docker will know to pull an image that it doesn't already have locally.
In a Kubernetes/EKS context in particular, it's important that the image: value changes to force an update (or downgrade!); if you tell Kubernetes that you want to run a Pod with the stable tag, and it already has one, it won't change anything.
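In that EKS scenario, the remote update could be a single kubectl call; the deployment and container names below are placeholders:
# point the existing Deployment at the newly built tag
kubectl set image deployment/myapp myapp=myrepo/myimage:20210414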

docker.sock bind mount AWS ECS Fargate

What I want to accomplish is the equivalent of :
docker run -v /var/run/docker.sock:/var/run/docker.sock <image>
EDITED:
I followed this: Does ECS task definition support volume mapping syntax?
But then it fails to save because this type of bind mount is not available for Fargate.
Is there another way to accomplish this on Fargate?
You are trying to access the Docker socket from within a container managed by AWS Fargate, which is not available: Fargate does not expose the underlying Docker daemon, so this bind mount cannot work. Instead, you need to make API calls to AWS to launch new containers, or probably rethink the whole approach.
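If the reason for mounting the socket was to launch sibling containers, a hedged sketch of the API-call route on Fargate is calling the ECS RunTask API via the AWS CLI; the cluster, task definition, and subnet below are placeholders:
aws ecs run-task \
    --cluster my-cluster \
    --launch-type FARGATE \
    --task-definition my-task:1 \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}'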

Google Cloud Composer - Deploying Docker Image

Definitely missing something, and could use some quick assistance!
Simply, how do you deploy a Docker image to an Airflow DAG for running jobs? Does anyone have a simple example of deploying a Google container and running it via Airflow/Composer?
You can use the Docker Operator, included in the core Airflow repository.
If pulling an image from a private registry, you'll need to set a connection config with the relevant credentials and pass it to the docker_conn_id param.
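As a sketch, assuming an Airflow 2.x CLI with the Docker provider installed, that connection could be created like this (the connection ID, registry URL, and credentials are placeholders):
airflow connections add docker_registry \
    --conn-type docker \
    --conn-host https://index.docker.io/v1/ \
    --conn-login my-user \
    --conn-password my-password
# then pass docker_conn_id="docker_registry" to the DockerOperator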

Docker backup container with startup parameters

I have been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes. Some of them were started using the CLI and docker start with some arguments and parameters.
All these settings are stored somewhere, because if I stop and restart such a container, everything works fine again. But if I do a commit, back it up with tar, load it on a different system, and do a docker start, it has lost all of its settings.
The procedure as described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which would create docker-compose files based on my settings, rather than just doing a docker start with the correct params. This web application would also take care of the volumes (just folders) and do an incremental backup of them with Borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker-compose file from a running container, so that I can redeploy a container 1:1 to another server, just run docker run mycontainer, and have it keep the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
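A sketch of narrowing that output down to the settings you would typically need to recreate the container (my_container is a placeholder):
docker container inspect --format '{{json .Config.Env}}' my_container
docker container inspect --format '{{json .Mounts}}' my_container
docker container inspect --format '{{json .HostConfig.PortBindings}}' my_container
docker container inspect --format '{{json .HostConfig.RestartPolicy}}' my_container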
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy, and it allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than as error-prone, hand-entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or for debugging, but not for reproducibility.
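A minimal sketch of what capturing those same settings in a compose file might look like (image, port, and volume path are placeholders):
version: "3.7"
services:
  app:
    image: myrepo/myimage:1.0
    ports:
      - "8080:80"
    volumes:
      - ./data:/var/lib/app
    restart: unless-stopped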
You would like to back up the instance (the running container), but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. If you really want to go down the road of saving the instance's current state, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the content of the volumes anyway; I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
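A sketch of that export/import round trip (container and image names are placeholders); keep in mind the imported image keeps the filesystem only, not the original run metadata, and never the volume contents:
docker export my_container | gzip > my_container.tar.gz
gunzip -c my_container.tar.gz | docker import - myrepo/my_container:backup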

Deploy Rails app using Capistrano from inside a Docker container

I managed to dockerize my Rails application for development and it works great. Before this I had a deploy setup using Capistrano. Now I would like to try to deploy using the same Capistrano, but executed from within the Docker container. My question is: can I use the same SSH key from my host machine, or should I generate a new key inside the container? The last option does not sound good to me, since the key would have to be recreated whenever the container gets destroyed. I am aware that in the long run I would probably be better off setting up the production server to run Docker and installing it through Docker Machine, but so far I'd like to keep the setup I already have in production.
Has anyone else tried this?
I think you should mount your device's SSH key into the container (as long as the container is not accessible from the network). In addition to your argument, you can more easily share your image with others, as they are able to just mount their own key themselves.
You can mount your SSH key into the container at runtime.
docker run -v /path/to/host/ssh-key:/path/to/container/ssh-key <image> <command>
The ssh-key will be available in the container at /path/to/container/ssh-key
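For example, a hedged sketch that mounts the host key read-only and runs the Capistrano deploy from inside the container (key path, image name, and stage are placeholders):
docker run --rm \
    -v ~/.ssh/id_rsa:/root/.ssh/id_rsa:ro \
    my-rails-app bundle exec cap production deploy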
