I have a question regarding docker.
I was preparing devops infrastructure using docker compose.
I needed to create several environments (with different yaml configurations).
I have found some things a little bit cumbersome to do using docker-compose commands alone.
Then I found the Docker SDK for different languages here; it seems to fit my needs perfectly.
However, I have not seen such setup yet.
My question: are there any drawbacks to using the aforementioned Docker API/SDK for organizing services?
Thanks a lot.
P.S.: I have searched for such an SDK for docker-compose. It does not exist, since docker-compose is only a command-line utility.
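For illustration, here is a minimal sketch of the kind of setup I have in mind, using the Docker SDK for Python; the network, container name, image, and ports are placeholders, not my real configuration:

```python
# Minimal sketch using the Docker SDK for Python (pip install docker).
# "app-net", "web", and the nginx image/ports are illustrative choices.
import docker

client = docker.from_env()

# Create a dedicated network, much like docker-compose does per project.
client.networks.create("app-net", driver="bridge")

# Start a service container attached to that network.
web = client.containers.run(
    "nginx:1.25",              # pin a version instead of relying on "latest"
    name="web",
    network="app-net",
    ports={"80/tcp": 8080},    # host port 8080 -> container port 80
    detach=True,
)
print(web.name, web.status)
```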
I have been working on a project where I have had several docker containers:
Three OSRM routing servers
Nominatim server
A container with the webpage code and all the needed dependencies
So, now I want to prepare a version that a user could download and run. What is the best practice to do such a thing?
Firstly, I thought about combining everything into one container, but I have read that it is not recommended to run several processes in one container. Secondly, I thought about wrapping everything up into a VM, but that is not really a "program" that a user can launch. My third idea was to write a script that would download each container from Docker Hub separately and launch the webpage. But I am not sure whether that is best practice, or whether there are better ideas.
When you need to deploy a full project composed of several containers, you may use a specialized tool.
A well-known tool for single-server usage is docker-compose:
Compose is a tool for defining and running multi-container Docker applications
https://docs.docker.com/compose/
You could provide your users with:
a docker-compose file (for example, the sketch below)
your application's docker images (e.g., through Docker Hub)
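For the setup described in the question, such a compose file might look roughly like this; the image names, tags, and ports are placeholders:

```yaml
# Illustrative docker-compose.yml: three OSRM servers, Nominatim,
# and the web frontend. All names/tags are made up for this sketch.
version: "3"
services:
  osrm-1:
    image: yourorg/osrm-server:1.0
  osrm-2:
    image: yourorg/osrm-server:1.0
  osrm-3:
    image: yourorg/osrm-server:1.0
  nominatim:
    image: yourorg/nominatim:1.0
  web:
    image: yourorg/webapp:1.0
    ports:
      - "8080:80"          # expose the webpage on the host
    depends_on:
      - osrm-1
      - nominatim
```

A user would then only need to run docker-compose up -d in the directory containing that file.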
Regarding clusters/cloud, we talk instead about orchestrators such as Docker Swarm, Kubernetes, or Nomad.
Kubernetes's documentation is the following:
https://kubernetes.io/
It seems that docker compose * mirrors many docker-compose * commands.
What is the difference between these two?
Edit: docker compose is not considered a tech preview anymore. The rest of the answer still stands as-is, including the not-yet-implemented command.
docker compose is currently a tech preview, but it is meant to be a drop-in replacement for docker-compose. It is being built into the docker binary to allow for new features. There is one command it has not implemented yet, and it has deprecated a few; in my experience, those are quite rarely used, though.
The goal is that docker compose will eventually replace docker-compose, but no timeline for that yet and until that day you still need docker-compose for production.
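To illustrate the drop-in nature: for common workflows the two invocations are interchangeable, and only the binary handling them differs.

```sh
# Same project, same compose file; only the entry point differs.
docker-compose up -d    # standalone Python tool
docker compose up -d    # compose subcommand built into the docker CLI
```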
Why did they do that?
docker-compose is written in Python, while most Docker development is in Go. They decided to recreate the project in Go, with the same features and more, for better integration.
I am building an app that has a couple of microservices and trying to prototype a CI/CD pipeline using Codeship and Docker.
I am a bit confused about the difference between using codeship-services.yml and docker-compose.yml. The Codeship docs say:
By default, we look for the filename codeship-services.yml. In its
absence, Codeship will automatically search for a docker-compose.yml
file to use in its place.
Per my understanding, docker-compose could be more appropriate in my case as I'd like to spin up containers for all the microservices at the same time for integration testing. codeship-services.yml would have helped if I wanted to build my services serially rather than in parallel.
Is my understanding correct?
You can use codeship-services.yml in the same manner as docker-compose.yml, so you can define your services and spin up several containers via the links key.
I do exactly that in my codeship-services.yml: I run some tests on my frontend service, and that service spins up all dependent services (backend, DB, etc.) when I run it via codeship-steps.yml, just like with docker-compose.yml. A rough sketch follows below.
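To make that concrete, here is roughly what I mean; the service names, images, and test command are placeholders, not my real setup:

```yaml
# codeship-services.yml (illustrative)
frontend:
  build: .
  links:
    - backend
    - db
backend:
  image: myorg/backend:1.0
db:
  image: postgres:9.6
```

```yaml
# codeship-steps.yml (illustrative): running this step starts
# the frontend service plus everything it links to.
- name: frontend-tests
  service: frontend
  command: npm test
```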
At the beginning it was a bit confusing for me to have two files which are nearly the same. I actually contacted Codeship support with that question, and the answer was that it could be the same file (because all features unavailable in the compose file are just ignored; see here), but in almost all cases they had seen, it was easier in the end to have two separate files: one for CI/CD and one for running docker-compose.
And the same turned out to be true for me as well, because I need a lot of services which are only for CI/CD, such as deployment containers or special test containers that just run cURL tests.
I hope that helps and doesn't confuse you more ;)
Think of codeship-services.yml as a superset of docker-compose.yml, in the sense that codeship-services.yml has additional options that Docker Compose doesn't provide. Other than that, they are totally identical. Both build images the same way, and both can start all containers at once.
That being said, I agree with Moema's answer that it is often better to have both files in your project and optimize each of them for its environment. Caching, for example, can only be configured in codeship-services.yml (see the sketch below). For our images, caching makes a huge difference in build times, so we definitely want to use it. And just like Moema, we need a lot of auxiliary tools on CI that we don't need locally (AWS CLI, curl, test frameworks, ...). On the other hand, we often run multiple instances of a service locally, which is not necessary on Codeship.
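As an illustration of such a CI-only option, a service entry with caching enabled might look roughly like this; the service name and image are placeholders, and the exact option placement is from my reading of the Codeship docs:

```yaml
# codeship-services.yml snippet (illustrative): "cached" is a
# Codeship-specific option that plain docker-compose does not know about.
app:
  build:
    image: myorg/app
    dockerfile: Dockerfile
  cached: true
```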
Having both files in our projects makes it much easier for us to cover the different requirements of CI and local development.
When trying to move a web container (Tomcat) to the latest technologies for better growth and support, I came across this blog. This part seems ideal for my needs:
... we are also incorporating Kubernetes into Mesos to manage the deployment of Docker workloads. Together, we provide customers with a commercial-grade, highly-available and production-ready compute fabric.
Now, how do I set up a local test environment to try this out? All these technologies seem interchangeable! I can run Docker on Mesos, Mesos on Docker, etc. Prepackaged instances allow me to run on other clouds. Other videos also make this seem great! Running out on the cloud is not a viable (allowed) option for me. Unfortunately, I cannot find 'instructions' on how to set up the configuration described/marketed/advertised.
If I am new to these technologies, and know there will be a learning curve, is there a way to get started on such a "simple task": running a Tomcat container on a Docker machine that is running Mesos/Kubernetes? That is, without spending days trying to learn and figure out each individual part!
Assuming that I "only" know how to create Docker containers (say, for centos-7): what commands, in what order (i.e., the secret 'code'), do I need to use to configure a small (2- or 3-node) local environment to try out running Tomcat?
Although I searched quite a bit, apparently not enough! Someone pointed me to this:
https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/mesos-docker.md
which is pretty close to exactly what I was looking for.
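For anyone landing here: once a cluster is up (however it is hosted), the Tomcat part itself reduces to a small manifest along these lines; the names and image tag are illustrative, not from the guide:

```yaml
# Illustrative Kubernetes manifest: one Tomcat pod plus a
# service exposing it inside the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  containers:
    - name: tomcat
      image: tomcat:8.0
      ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  selector:
    app: tomcat
  ports:
    - port: 8080
```

Applied with kubectl create -f tomcat.yaml against whatever cluster the guide leaves you with.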
I am very new to Docker and am currently trying to work out whether there is any best-practice guide for updating software that runs inside a Docker container in a very large distributed environment. I have already found a couple of posts about updating a MySQL database in Docker, etc. That gives a good hint for any software that stores data, but what if you want to update other parts, or your own software packages or services, that are distributed and used by several other Docker images through docker-compose?
Is there someone with real-life experience doing this in such an environment who can help me or other newbies understand the best practices in Docker, if there are any?
Thanks for your help!
You never update software in a running container. You pull down a new version from the hub. If we assume you're using the latest tag (which is a bad idea; always pin your versions) of your image, and it's one of the official library images or a publicly available image that uses automated builds, you'll get the latest version of the container image when you pull it.
This assumes you've also separated the data out of your container, either as a host volume or using the data container pattern.
The container should be considered immutable: if you change its state, it's no longer a true version of the image.
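In practice, an update then looks roughly like this; the image name, tag, container name, and volume path are placeholders:

```sh
# Replace the container, keep the data in a named volume.
docker pull myorg/app:2.4.1           # pull the new, pinned version
docker stop app && docker rm app      # throw away the old container
docker run -d --name app \
  -v app-data:/var/lib/app \          # data lives in the volume, not the container
  myorg/app:2.4.1
```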