It seems that docker compose * mirrors many docker-compose * commands.
What is the difference between these two?
Edit: docker compose is not considered a tech preview anymore. The rest of the answer still stands as-is, including the not-yet-implemented command.
docker compose is currently a tech preview, but is meant to be a drop-in replacement for docker-compose. It is being built into the docker binary to allow for new features. One command has not been implemented yet, and a few have been deprecated; in my experience, these are quite rarely used though.
The goal is that docker compose will eventually replace docker-compose, but there is no timeline for that yet, and until that day you still need docker-compose for production.
Why did they do that?
docker-compose is written in Python, while most Docker development happens in Go. They decided to recreate the project in Go, with the same features and more, for better integration.
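To illustrate the drop-in replacement in practice, the same workflow works with either binary. This assumes a docker-compose.yml in the current directory; the service name web is just an example:

    # Old standalone Python tool
    docker-compose up -d
    docker-compose logs -f web
    docker-compose down

    # New Go implementation, built into the docker CLI
    docker compose up -d
    docker compose logs -f web
    docker compose down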
Related
I have a question regarding docker.
I was preparing devops infrastructure using docker compose.
I needed to create several environments (with different yaml configurations).
I have found some things a little bit cumbersome to do using docker-compose commands only.
I found the Docker SDK for different languages here; it seems to fit my needs perfectly.
However, I have not seen such a setup in practice yet.
My question: are there any drawbacks to using the mentioned Docker API for organizing services?
Thanks a lot.
P.S.: I have searched for such an SDK for docker-compose. It does not exist, since docker-compose is only a command-line utility.
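For what it's worth, a minimal sketch with the Docker SDK for Python (docker-py) looks like this; the image, container name, and port mapping are illustrative only:

    import docker

    # Connect to the local Docker daemon using the standard environment variables
    client = docker.from_env()

    # Start a container in the background, similar to `docker run -d -p 8080:80 nginx`
    container = client.containers.run(
        "nginx:latest",
        name="example-web",          # hypothetical container name
        ports={"80/tcp": 8080},
        detach=True,
    )
    print(container.name, container.status)

    # Tear it down when done
    container.stop()
    container.remove()

Since the compose CLI ultimately drives the same daemon API, scripting several environments this way is viable; you just give up the declarative YAML file and have to manage startup order and networks yourself.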
I have been working on a project where I have had several docker containers:
Three OSRM routing servers
Nominatim server
Container where the webpage code is with all the needed dependencies
So, now I want to prepare a version that a user could download and run. What is the best practice to do such a thing?
Firstly, I thought about joining everything into one container, but I have read that running several processes in one container is not recommended. Secondly, I thought about wrapping everything up in a VM, but that is not really a "program" that a user can launch. My third idea was to write a script that would download each container from Docker Hub separately and launch the webpage. But I am not sure if that is best practice, or whether there are better ideas.
When you need to deploy a full project composed of several containers, you may use a specialized tool.
A well-known one for single-server usage is docker-compose:
Compose is a tool for defining and running multi-container Docker applications
https://docs.docker.com/compose/
You could provide your users with:
a docker-compose file (see the example below)
your application's Docker images (e.g., through Docker Hub)
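For instance, a minimal docker-compose.yml for the project described in the question might look like this sketch; the image names and ports are hypothetical placeholders:

    version: "3"
    services:
      osrm:
        image: yourorg/osrm-server:latest    # hypothetical image on Docker Hub
        ports:
          - "5000:5000"
      nominatim:
        image: yourorg/nominatim:latest      # hypothetical image on Docker Hub
      web:
        image: yourorg/webapp:latest         # hypothetical image on Docker Hub
        ports:
          - "80:80"
        depends_on:
          - osrm
          - nominatim

Your users would then only need to run docker-compose up -d, and Compose pulls the images and starts everything in order.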
Regarding clusters/cloud, we talk more about orchestrators like Docker Swarm, Kubernetes, or Nomad.
Kubernetes's documentation is the following:
https://kubernetes.io/
I'm currently into Docker, and I'm asking myself why containers in general weren't hyped before Docker. It's not like containers were something new; the technology has been around for quite some time. But Docker gained its success practically overnight.
Is there something I didn't keep in mind?
It's a very broad question but I will try to answer you.
Docker was at first built on LXC; they switched to libcontainer later.
LXC is actually pretty hard to use compared to Docker; you don't have all the Docker-related tooling like Dockerfiles, Compose, and so on.
So I would say that containers weren't really a thing before because of the difficulty of using LXC directly.
Like Wassim, I would say the main reason was that it needed motivated sysadmins and specific kernels (with OpenVZ and AUFS), among other things.
Creating the same thing as a docker image was a complicated process.
Today it is a straightforward process: create a Dockerfile and just run
docker build -t mytag .
and you have created an image.
In 2004, you could not do that so easily.
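For a sense of how small that step has become, a complete Dockerfile can be as short as the sketch below; the base image and script name are made up for illustration:

    # Package a single script on top of a minimal base image
    FROM alpine:3.19
    COPY app.sh /app.sh
    RUN chmod +x /app.sh
    CMD ["/app.sh"]

Running docker build -t mytag . in the same directory produces a ready-to-run image.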
I like the idea of modularizing an application into containers (db, frontend, backend, ...). However, according to the Docker docs, "Compose is great for development, testing, and staging environments".
The sentence says nothing about the production environment, so I am confused here.
Is it better to use a Dockerfile to build the production image from scratch and install the whole LAMP stack (etc.) there?
Or is it better to build the production environment with a docker-compose.yml? Is there any reason (overhead, linking, etc.) that Docker doesn't explicitly say Compose is great for production?
Really you need to define "production" in your case.
Compose simply starts and stops multiple containers with a single command. It doesn't add anything to the mix you couldn't do with regular docker commands.
If "production" is a single docker host, with all instances and relationships defined, then compose can do that.
But if instead you want multiple hosts and dynamic scaling across the cluster then you are really looking at swarm or another option.
Just to extend what @ChrisSainty already mentioned: Compose is just an orchestration tool, and you can use your own images, built with your own Dockerfiles, together with your Compose settings on a single host. But note that it is possible to run Compose against a Swarm cluster, as it exposes the same API as a single Docker host.
In my opinion it is an easy way to implement a microservice architecture using containers, tailoring services for efficiency and availability. In addition, I recommend checking the official documentation on good practices for using Compose in production environments.
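To make the Swarm route mentioned above concrete: the same compose file can be deployed across a cluster with the stack commands. The stack name mystack is just an example:

    # On a manager node, turn the host into a (single-node) swarm
    docker swarm init

    # Deploy the services from docker-compose.yml as a stack
    docker stack deploy -c docker-compose.yml mystack

    # Inspect what is running
    docker stack services mystack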
I created a docker image with pre-installed packages in it (apache, mysql, memcached, solr, etc). Now I want to run a command in a container made from this image, and this command relies on all my packages. I want to have all of them started when I start a new container.
I tried to use /sbin/init, but it doesn't work in docker.
The general opinion is to use a process manager to do this. I won't go into the details here, since I wrote a blog post on that: http://blog.trifork.com/2014/03/11/using-supervisor-with-docker-to-manage-processes-supporting-image-inheritance/
Note that another rather general opinion is to split your containers. MySQL generally goes in a separate container, but of course you can try to get that working later on as well :)
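For reference, a minimal supervisord configuration along the lines of that blog post might look like this; the program names and paths are illustrative, not a recipe:

    ; supervisord.conf - run several services in one container (illustrative)
    [supervisord]
    nodaemon=true

    [program:apache]
    command=/usr/sbin/apache2ctl -D FOREGROUND

    [program:mysql]
    command=/usr/bin/mysqld_safe

    [program:memcached]
    command=/usr/bin/memcached -u memcache

The image's Dockerfile would then end with something like CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"], so supervisord becomes PID 1 and keeps all the services running.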
I see that this is an old topic; however, for someone who just came across it: docker-compose can be used to connect multiple containers, so most of the processes can be split up into different containers. Furthermore, as mentioned earlier, different process managers can be used to run processes simultaneously, and one that I would like to mention is Chaperone. I find it really easy to use and slightly better than supervisor!
docker-compose and docker-sync: you cannot go wrong applying this concept.
-Glynn