How can I share my full app (.yml file) with others?

I created an app that consists of many components, so I use docker-compose.
I published all my images to my private repository (but I also use public images from other providers).
If I have many customers: how can they receive my full app?
I could send them my docker-compose.yml file by email, or, if I have access to their servers, I could scp the .yml file over.
But is there another solution for providing my full app without scp'ing a .yml file?
Edit:
So I just read about docker-machine. This looks good, and I already linked it with an Azure subscription.
Now what's the easiest way to deploy a new VM with my Docker application? Do I still have to scp my .yml file, ssh into the machine, and start docker-compose? Or can I tell docker-machine to use a specific .yml during VM creation and run it automatically?
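For reference, one common docker-machine pattern avoids copying the file at all: docker-machine env points your local Docker client at the remote engine, so docker-compose can read the .yml from your local disk while the containers actually run on the VM. A minimal sketch (the machine name is illustrative):

    # Create the VM (the azure driver also needs your subscription flags).
    docker-machine create --driver azure my-azure-vm

    # Point the local Docker client at the remote engine.
    eval $(docker-machine env my-azure-vm)

    # Compose reads ./docker-compose.yml locally; containers start on the VM.
    docker-compose up -d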

There is no official distribution system specifically for Compose files, but there are many options.
The easiest option is to host the Compose file on a website; you could even use GitHub or GitHub Pages. Once it's served over HTTP, anyone can curl it to download it.
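A minimal sketch of that workflow (the URL is a placeholder for wherever you host the file):

    # Download the hosted Compose file, then bring the app up.
    curl -fsSL https://example.com/myapp/docker-compose.yml -o docker-compose.yml
    docker-compose up -d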
There is also:
- composehub, a community project that acts as a package manager for Compose files
- Some related issues: #1597, #3098, #1818
- The experimental DAB (Distributed Application Bundle) feature in Docker; a sketch of that workflow follows below
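As a rough sketch of the DAB workflow (it required an experimental Docker engine and a swarm, and the exact syntax varied across experimental releases, so treat this as illustrative only):

    # Produce a <project>.dab bundle from docker-compose.yml (Compose 1.8+).
    docker-compose bundle

    # Deploy the bundle as a stack on a swarm (experimental Docker CLI).
    docker deploy myproject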

Related

Automatically deploying docker-compose on DigitalOcean via GitHub

I am a newbie when it comes to Docker.
I have a web app that contains 4 services, and I managed to create a docker-compose file for it.
I would now like to publish it.
My plan is to:
- upload the whole repository, with the compose file and the source code, to a private repository on GitHub;
- then create a droplet on DigitalOcean.
I would like to be able to publish the code through GitHub only, so that it is automatically uploaded to the server and the required services are restarted.
What would be the best approach?
Yes, there is: DigitalOcean's App Platform. Once you use it to deploy your Docker image, your site is rebuilt automatically whenever you update the image via GitHub (CI/CD).
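A minimal sketch of an App Platform app spec for that approach (the repo name, branch, and port are placeholders for your own setup):

    # app.yaml -- hypothetical App Platform spec
    name: my-web-app
    services:
      - name: backend
        github:
          repo: youruser/yourapp
          branch: main
          deploy_on_push: true   # rebuild and redeploy on every push
        dockerfile_path: Dockerfile
        http_port: 8080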
I hope this helps.

How to publish docker-compose.yml itself?

I'd like to make sure that our frontend developers have access to the latest version of the backend web app and can update it whenever they want, so as to avoid incompatibilities with the API, which is also under development.
I have created a docker-compose.yml file containing two services: one for the backend web application, built with a custom Dockerfile, and a generic postgres image for the database. It all works fine.
I already published the backend webapp image to my private docker registry powered by Nexus repository manager, using the docker-compose push command.
Now I would like to somehow make my docker-compose.yml available, so that all the frontend devs need to do is run it with a simple command.
Is there a way to publish docker-compose.yml to a Docker registry so I can avoid sharing backend sources with the frontend devs?
The traditional solution for sharing docker-compose.yml files has been version control (e.g. GitHub).
Recently, Docker has been working on docker-app, which allows you to share docker-compose.yml files using a registry server. It is a separate install and currently experimental, so I wouldn't base a production environment on it, but it may be useful for developers to try out. You can check out the project here:
https://github.com/docker/app
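As a rough sketch of how docker-app is meant to be used (the commands are from the experimental project, so names and flags may differ between versions; the registry tag is a placeholder):

    # Package the existing docker-compose.yml as an application bundle.
    docker-app init myapp

    # Push the app package to a registry.
    docker-app push --tag myregistry.example.com/myapp:0.1.0

    # On the consumer side: render the Compose file and run it.
    docker-app render myapp | docker-compose -f - up -d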

Where are you supposed to store your docker config files?

I'm new to docker so I have a very simple question: Where do you put your config files?
Say you want to install MongoDB. You install it, but then you need to create/edit a config file. I'm not sure such files belong on GitHub, since they're used for deployment, though it's not a bad place to store them.
I was just wondering if docker had any support for storing such config files so you can add them as part of running an image.
Do you have to use swarms?
Typically you'll store the configuration files on the Docker host and then use a volume to bind mount them into the container. This lets you manage the configuration separately from the running containers; when you change the configuration, you can just restart the container.
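For example, a minimal sketch with the official mongo image (the host path is a placeholder):

    # Bind mount a config file from the host and tell mongod to use it.
    docker run -d --name mongo \
      -v /srv/mongo/mongod.conf:/etc/mongo/mongod.conf:ro \
      mongo --config /etc/mongo/mongod.conf

    # After editing /srv/mongo/mongod.conf on the host, just restart:
    docker restart mongo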
You can then use a configuration management tool like Salt, Puppet, or Chef to manage copying the configuration file onto the Docker host. Secrets such as passwords can be handled by the tool's secrets capabilities. Set up this way, changing a configuration file only requires restarting the container, not building a new image.
Yes, in most cases you definitely want to keep your Dockerfiles in version control. If your org (or you personally) uses GitHub for this, that's fine, but stick them wherever your other repos are. One of the main ideas in DevOps is to treat infrastructure as code. In fact, one of the main benefits of something like a Dockerfile (or a Chef cookbook, or a Puppet manifest, etc.) is that it is "used for deployment" but can also be version-controlled, meaningfully diffed, and so on.

How do I finalize my Docker setup, and what is such a setup actually called?

I am pretty new to Docker. After reading up on exactly what I needed, I figured out how to create a pretty nice Docker setup: one in which I can start up multiple systems using a single docker-compose.yml file.
I am currently using this for testing specific PHP code on different PHP and MySQL versions. The file structure looks something like this:
./mysql55/Dockerfile
./mysql56/Dockerfile
./mysql57/Dockerfile
./php53/Dockerfile
./php54/Dockerfile
./php56/Dockerfile
./php70/Dockerfile
./php71/Dockerfile
./php72/Dockerfile
./web (shared folder with test files available on all php machines)
./master_web (web interface to send test request to all possible versions using one call)
./docker-compose.yml
In the docker-compose file I set up different containers, most referring to the local Dockerfiles and some referring to online image names. When I run docker-compose up, all containers start as expected in the configured network configuration, and I'm able to use the setup as desired.
First of all, I would like to know what this kind of setup is called. Is it a "Docker swarm", or is it called something else?
Secondly, I'd like to make one "compiled/combined" artifact (image, container, swarm, engine, machine, or however it is called) out of this, which I can save without depending on external sources again. Of course the docker-compose.yml file will work as long as all the referenced external sources are still available, but I'd like to publish my fully configured setup as is. How do I do that?
You can publish your built images with a Docker registry; you can set up your own or use a third-party service.
After that, prefix your image names with your registry's IP/DNS in docker-compose.yml. This way you can deploy it anywhere docker-compose is installed (docker-compose itself can also run as a Docker container); you just need to copy your docker-compose.yml file there, as sketched below.
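A minimal sketch of that flow (the registry host and project names are placeholders):

    # Build, tag, and push one of the local images to a private registry.
    docker build -t registry.example.com/myproject/php72 ./php72
    docker push registry.example.com/myproject/php72

In docker-compose.yml, the corresponding service would then use image: registry.example.com/myproject/php72 instead of a local build: path.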
docker-machine is a tool for deploying to multiple machines, as is docker swarm.

Graphite installation in a docker container - volume query

I am installing Graphite via a Docker container.
I have read that whisper files should not be saved in the container.
So I will be using a Docker data volume to save these on the host machine.
My question is: is there anything else I should be saving on the host? (I know this is subjective, so I guess I'm looking for recommendations on what's important.)
I don't believe I need the configuration (e.g. carbon.conf), as this will come from my installation.
So I'm thinking: are there any other files from Graphite I need (e.g. log files etc.)?
What is your reason for keeping log files? You do need the directory structure in place, though. Logging defaults to /opt/graphite/storage/logs; in here you have the carbon-cache/ and webapp/ directories. The log directory for the webapp is set in its config, local_settings.py, whereas carbon uses carbon.conf. The configs are well documented, so you can look into them for specific information.
Apart from the configs generated during installation, the only other 'file' crucial for the webapp to work is graphite.db in /opt/graphite/storage. It is used internally by the Django webapp for housekeeping information such as user auth. It gets generated by python manage.py syncdb, so I believe you can generate it again on the target system.
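A sketch of the corresponding volume layout (host paths are placeholders, and graphiteapp/graphite-statsd is just one commonly used image):

    # Persist whisper data, logs, and the webapp's graphite.db on the host.
    docker run -d --name graphite \
      -v /srv/graphite/whisper:/opt/graphite/storage/whisper \
      -v /srv/graphite/logs:/opt/graphite/storage/logs \
      -v /srv/graphite/graphite.db:/opt/graphite/storage/graphite.db \
      graphiteapp/graphite-statsd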