Let's say I have a Java EE application which requires a database, and I would also like to use Apache.
Now, is it better to make a single image containing all three pieces, or three containers, one for each of these, connected through Docker networking (linking is deprecated, right?)?
You can also use Docker's built-in swarm mode. That gives you built-in encryption for passing your secrets around, such as the database login. Here's an official Docker sample app that shows a Java Spring Boot app connecting to a database, with each service separated.
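As a rough sketch of that idea (the image name and secret name here are illustrative, not taken from the sample app), a Compose v3 file for swarm mode could pass the database password around as a secret:

# docker-compose.yml (deployed with: docker stack deploy -c docker-compose.yml mystack)
version: "3.7"

services:
  app:
    image: example/spring-boot-app        # illustrative application image
    secrets:
      - db_password
    environment:
      # the app reads the password from the file swarm mounts at this path
      DB_PASSWORD_FILE: /run/secrets/db_password
  db:
    image: mysql:5.7
    secrets:
      - db_password
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_password

secrets:
  db_password:
    external: true    # created beforehand with: docker secret create db_password -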
Docker is a lightweight solution for isolating applications. So if you have 3 different applications, you will almost always run those in 3 separate containers. Some of the advantages that gives you are:
The ability to independently scale each component
The ability to run components on different hosts
The ability to independently upgrade one component without impacting the others
The only time I merge application components into a single container is when they cannot communicate through a networking API, and they really need filesystem- and process-level integration between the parts.
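For the original question's stack (Java EE app, database, Apache), a minimal Compose sketch of that three-container split could look like the following; the application image name and the choice of PostgreSQL are just illustrative:

# docker-compose.yml
version: "3"

services:
  apache:
    image: httpd:2.4              # Apache in front, the only published port
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    image: example/javaee-app     # illustrative: your Java EE application image
    depends_on:
      - db
  db:
    image: postgres:13            # any database image works the same way
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:

Each piece can then be scaled, upgraded or moved to another host independently of the other two.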
Can somebody explain it with some examples? Why are multi-container Docker apps built, when you could contain your app in a single Docker container?
When you make a multi-container app you have to deal with networking. Isn't it easier to run a single image as a single container rather than two images as two containers?
There are several good reasons for this:
It's easier to reuse prebuilt images. If you need MySQL, or Redis, or an Nginx reverse proxy, these all exist as standard images on Docker Hub, and you can just include them in a multi-container Docker Compose setup. If you tried to put them into a single image, you'd have to install and configure them yourself.
The Docker tooling is built for single-purpose containers. If you want the logs of a multi-process container, docker logs will generally print out the supervisord logs, which aren't what you want; if you want to restart one of those containers, the docker stop; docker rm; docker run sequence will delete the whole thing. Instead, with a multi-process container you need to use debugging tools like docker exec to do anything, which is harder to manage.
You can upgrade one part without affecting the rest. Upgrading the code in a container usually involves building a new image, stopping and deleting the old container, and running a new container from the new image. The "deleting the old container" part is important, and routine; if you need to delete your database to upgrade your application, you risk losing data.
You can scale one part without affecting the rest. More applicable in a cluster environment like Docker Swarm or Kubernetes. If your application is overloaded (especially in production) you'd like to run multiple copies of it, but it's hard to run multiple copies of a standard relational database. That essentially requires you to run these separately, so you can run one proxy, five application servers, and one database.
Setting up a multi-container application shouldn't be especially difficult; the easiest way is to use Docker Compose, which will deal with things like creating a network for you.
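To make the reuse point concrete, here is a minimal sketch of such a Compose setup; only the application itself is built from your own Dockerfile, while the other services are prebuilt images pulled straight from Docker Hub (the image tags are illustrative):

# docker-compose.yml
version: "3"

services:
  proxy:
    image: nginx:1.25        # prebuilt reverse proxy, used as-is
    ports:
      - "8080:80"
    depends_on:
      - app
  app:
    build: .                 # only your own code needs a custom image
    depends_on:
      - mysql
      - redis
  mysql:
    image: mysql:8.0         # prebuilt, just configured via environment
    environment:
      MYSQL_ROOT_PASSWORD: example
  redis:
    image: redis:7           # prebuilt, no configuration needed

docker-compose up creates a network and starts all four containers on it, while docker-compose logs app or docker-compose restart app operate on one component at a time.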
For the sake of simplicity, I would say you can run only one application with a public entry point (like an API) in a single container. This approach is actually recommended by the official Docker documentation.
Microservices
Because of this constraint, you cannot run microservices that each require their own entry point inside a single Docker container.
It becomes more of a discussion about the advantages of a monolithic application vs microservices.
Database
Even if you decide to run only the monolithic application, you still need to connect some database to it. As you noticed, Docker adds a network-configuration layer, so if you want to run the database and the application locally, the easiest way is to use docker-compose to run both images (the database and your application) inside one automatically configured network.
# docker-compose.yml
services:
  # Application definition
  application: <your app definition>

  # Database definition
  database:
    image: mysql:5.7
In my example, the main app can reach the DB simply by using the service name as the hostname, e.g. database:<port> (plus credentials, possibly), and it will work, because Compose makes the service names resolvable on the shared network.
Scalability
However, why should we split the database image from the application image? One word: scalability. For development purposes you want a local DB, maybe in Docker because it is handy. For production purposes you will run the application image somewhere (Kubernetes, Docker Swarm, Azure App Services, etc.). To handle multiple requests at the same time, you want to run multiple instances of your application. But what about the database? You cannot rely on an internal DB instance hosted in the same container, because each instance of your app in its own container would then have a completely different set of data (with no synchronization between them).
Most often you will elect to use a separate database server, whether it runs in a container or as a fully managed database (like Azure Cosmos DB or MongoDB Atlas), with configuration, scaling, and synchronization dedicated to the DB alone. Your app then only needs to know the proper URL for it. Most cloud providers expose such services out of the box, so you don't have to handle the configuration yourself.
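In swarm or Kubernetes terms this is exactly the split between a replicated stateless service and a single stateful one. A rough Compose v3 sketch for swarm mode (the application image name is illustrative):

# docker-compose.yml (for docker stack deploy)
version: "3.8"

services:
  application:
    image: example/app        # illustrative, stateless application image
    deploy:
      replicas: 5             # five identical copies behind swarm's routing mesh
  database:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    deploy:
      replicas: 1             # the stateful database stays a single instance
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data: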
Easy to change
The last but not least argument is about changing the initial setup over time. You might change the database provider, or upgrade the image version in the future (such things are needed from time to time). When you separate the images, you can modify one without touching the others, which decreases the cost of maintenance significantly.
Also, you can add additional services very easily. A different logging aggregator? No problem. An additional microservice running out of the box? Easy.
I have a microservice with about 6 separate components.
I am looking to sell instances of this microservice to people who need dedicated versions of it for their project.
Docker seems to be the solution to doing this as easily as possible.
What is still very unclear to me is: is it possible to use Docker to deploy whole instances of microservices within a cloud service like GCP or AWS?
Is this something more specific to the Cloud provider itself?
Basically, in the end I'd like to be able, via code, to start up a whole new instance of my microservice within its own network, with each component able to speak to the others.
One big problem I see is assigning IPs to the containers so that they can find each other, independent of which network they are in. Is this even possible, or is it not yet feasible with current cloud technology?
Thanks a lot in advance, I know this is a big one...
This is definitely feasible and is nowadays one of the most popular ways to ship and deploy applications. However, the deployment procedure varies slightly depending on the cloud provider you choose.
The good news is that the packaging of your microservices with Docker is independent from the cloud provider you use. You basically need to package each component in a Docker image, and deploy these images to a cloud platform.
All popular cloud platforms nowadays support deployment of Docker containers. In addition, you can use orchestration frameworks such as Docker Swarm or Kubernetes on these platforms to manage the microservices deployment.
I am exploring the use of containers in a new application and have looked at a fair amount of content and created a sandbox environment to explore Docker and containers. My struggle is more about understanding which components need to be containerized individually versus bundling multiple components into my own container, and what points to consider when architecting this.
Example:
I am building a Python back-end service to be executed via a web service call.
The service would interact with both MongoDB and RabbitMQ.
My questions are:
Should I run an individual OS container (e.g. Ubuntu), a Python container, a MongoDB container, a RabbitMQ container, etc.? Combined they all form part of my application, and by decoupling everything I have the ability to scale individually?
How would I be able to bundle/link these for deployment without losing the benefits of decoupling/decomposing into individual containers?
Is an OS and Python container actually required, as this will all be running on an OS with Python anyway?
I would love to see how people have approached this problem.
Docker's philosophy: using microservices in containers. The term "Microservice Architecture" has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services.
Some advantages of microservices architecture are:
Easier upgrade management
Eliminates long-term commitment to a single technology stack
Improved fault isolation
Makes it easier for a new developer to understand the functionality of a service
Improved Security
...
Should I run an individual OS container (e.g. Ubuntu), a Python container, a MongoDB container, a RabbitMQ container, etc.? Combined they all form part of my application, and by decoupling everything I have the ability to scale individually?
You don't need an individual OS container. Each container uses the Docker host's kernel and contains only the binaries it requires, the Python binaries for example.
So you will have a Python container for your Python service, a MongoDB container and a RabbitMQ container.
How would I be able to bundle/link these for deployment without losing
the benefits of decoupling/decomposing into individual containers?
For deployments, you will use Dockerfiles plus a docker-compose file. Dockerfiles contain the instructions to create a Docker image. If you are just using official library images, you don't need Dockerfiles.
docker-compose will help you orchestrate the container builds (from the Dockerfiles), the start-ups, creating the required networks, mounting the required volumes, etc.
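A minimal sketch of that layout for this particular stack, assuming the Python service's Dockerfile sits next to the compose file and the service listens on port 8000 (the port, database name and image tags are illustrative):

# docker-compose.yml
version: "3"

services:
  api:
    build: .                   # built from your own Dockerfile (the Python service)
    ports:
      - "8000:8000"
    environment:
      # service names double as hostnames inside the compose network
      MONGO_URL: mongodb://mongo:27017/appdb
      AMQP_URL: amqp://guest:guest@rabbitmq:5672/
    depends_on:
      - mongo
      - rabbitmq
  mongo:
    image: mongo:6             # official image, no Dockerfile needed
    volumes:
      - mongo-data:/data/db
  rabbitmq:
    image: rabbitmq:3-management

volumes:
  mongo-data:

A single docker-compose up --build then builds the Python image and starts all three containers on one network, while each of them can still be scaled or replaced independently.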
We have multiple Spring Boot applications where one provides input to another.
As of now we are deploying them in 3 different VMs that connect to each other.
Is it advisable to make all these 3 into a single docker image?
Say, if I am able to make them into a single Docker image, it is easy for me to provide this image to different teams.
As for the memory needs, it is OK for them to be part of a single image; I did that analysis.
If they are tightly coupled and can be managed under one process, yes.
You need to make sure you can stop the whole system properly (to avoid zombie processes: see "Use of Supervisor in docker").
But the idea behind containers remains to isolate each component of your system in its own container, which facilitates debugging (when one part misbehaves), upgrades, logging and monitoring.
You can experiment with Docker bundles (DAB) in order to facilitate the distribution of your multi-container application.
To help you with DAB, see:
"Deploy Docker Compose (v3) to Swarm (mode) Cluster"
"My first try with DAB (aka Distributed Application Bundle)"
"Docker Services, Stack and Distributed Application Bundle"
Distributed Application Bundle, or DAB, is a new concept introduced in Docker 1.12: a portable format for multiple containers. Each bundle can then be deployed as a Stack at runtime. Let's use a Docker Compose file, create a DAB from it, and deploy it as a Docker Stack.
https://blog.couchbase.com/2016/july/docker-services-stack-distributed-application-bundle
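As a rough illustration of that flow (the application image names are illustrative), the starting point is just an ordinary Compose v3 file, which docker-compose bundle could (at the time) turn into a .dab, or which docker stack deploy -c docker-compose.yml mystack can deploy to a swarm directly:

# docker-compose.yml
version: "3"

services:
  web:
    image: example/web         # illustrative application image
    ports:
      - "80:8080"
    deploy:
      replicas: 3              # deploy-time settings only apply to stacks/swarm
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data: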
I am trying to understand how Docker can be used to dockerize a multilayered application.
My Tomcat application needs MongoDB, MySQL, Redis, Solr and RabbitMQ. I have been playing with Docker for a couple of weeks now. I am able to install and use Mongo/MySQL containers, but I am not getting how I can completely ship the application using Docker. I have a few questions.
How should the images be organized? Should I have one image that has all the components installed, or separate images (one for Tomcat, one for Mongo, one for MySQL, etc.) and start those containers using a bash script outside of Docker?
What is the Docker way of maintaining multiple containers at once? Say I have multiple containers (Mongo, MySQL, Tomcat, etc.) that need to work together to run my application; is there any built-in way of dealing with this so that one command/script does it?
Suppose I dockerize my application; how can I manage the various routine tasks that need to be performed, like incremental code deployment, database patches, etc.? Currently we are using Vagrant, and we also use Fabric along with Vagrant for various tasks. For example, after vagrant up we use fab tasks for all kinds of routine things like code deployment, DB refresh, adding volumes, starting/stopping services, etc. What would be Docker's way of doing this?
With Vagrant, if a VM crashes due to high CPU, etc., the host system is not affected. But I see Docker eating up a lot of host resources. Can we put limits on that, say no more than one CPU core for a given container, etc.?
Because we use Vagrant, most of the questions above are in that context. When I started with Docker I thought of it as a kind of virtualization technology that could replace our huge Vagrant-based infrastructure. Please correct me if I am wrong.
I advise you to look at docker-compose:
you'll be able to define the architecture of your application
you can then easily build it and run it (with one command)
pretty much the same setup for dev and prod
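For the stack described in the question, such a compose file might look roughly like this (the image tags are illustrative; only the Tomcat application is built from your own Dockerfile):

# docker-compose.yml
version: "3"

services:
  tomcat:
    build: .                   # your application image, built from your Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - mongo
      - mysql
      - redis
      - solr
      - rabbitmq
  mongo:
    image: mongo:6
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
  redis:
    image: redis:7
  solr:
    image: solr:9
  rabbitmq:
    image: rabbitmq:3

One docker-compose up brings the whole thing up, which answers the "one command/script" question, and the same file can largely be reused for dev and prod with small overrides.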
For microservices, composition, etc., I won't repost on this.
For container resource allocation:
docker run has various resource control options (using Linux cgroups); see my gist here:
https://gist.github.com/afolarin/15d12a476e40c173bf5f
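For example, the limit asked about above ("not more than one CPU core for that container") maps to docker run flags such as --cpus and --memory, and the same caps can be declared in a Compose file (version 2.x syntax shown; the image name is illustrative):

# docker-compose.yml
version: "2.4"

services:
  app:
    image: example/app        # illustrative image name
    cpus: 1.0                 # at most one CPU core's worth of time
    mem_limit: 512m           # hard memory cap
    mem_reservation: 256m     # soft reservation under memory pressure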