Mesosphere inter-service communication using Marathon - docker

I'm currently looking into Mesosphere DCOS to run multiple micro-services using Docker containers. Each micro-service's code is already built by my CI into a Docker container and uploaded to a private container repo.
If I now deploy container A and container B as two different apps using Marathon, how would app A be able to reach app B?
Do I need additional service discovery like Consul?
It would be great if I could get some insights here, and maybe even some links / docs to get me started :)

The current solution would be to use some kind of service discovery.
DCOS already comes with MesosDNS, which will automatically create a DNS entry for each of your containers started by Marathon.
See the MesosDNS documentation for details on using MesosDNS on DCOS.
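As a rough illustration (the app id "app-b" and port 8080 below are made up; the actual port depends on your Marathon port mapping), an app deployed by Marathon becomes resolvable under the marathon.mesos domain, so container A can reach container B by name:

```
# Resolve the Marathon app "app-b" via MesosDNS and call it from inside app A.
dig +short app-b.marathon.mesos                 # returns the IP of a task running app-b
curl http://app-b.marathon.mesos:8080/health    # port 8080 is an assumption from the port mapping
```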
Hope this helped!
BTW: Feel free to contact the DCOS support directly via the little chat icon in the DCOS UI.

Related

How to create a single project out of multiple docker images

I have been working on a project where I have had several docker containers:
Three OSRM routing servers
Nominatim server
Container where the webpage code is with all the needed dependencies
So, now I want to prepare a version that a user could download and run. What is the best practice to do such a thing?
My first thought was to combine everything into one container, but I have read that running several processes in one container is not recommended. My second idea was to wrap everything in a VM, but that is not really a "program" a user can launch. My third idea was to write a script that downloads each container from Docker Hub separately and launches the webpage. But I am not sure whether that is best practice, or whether there are better options.
When you need to deploy a full project composed of several containers, you can use a specialized tool.
A well-known one for single-host usage is docker-compose:
Compose is a tool for defining and running multi-container Docker applications
https://docs.docker.com/compose/
You could provide your users with:
a docker-compose file
your application's Docker images (e.g. through Docker Hub).
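A minimal sketch of what you could ship (image names, service names and ports below are placeholders, not your actual OSRM/Nominatim images; only one OSRM service is shown for brevity):

```
# Hypothetical docker-compose.yml distributed to the user alongside the published images.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  osrm:
    image: myrepo/osrm-backend:latest
    ports: ["5000:5000"]
  nominatim:
    image: myrepo/nominatim:latest
    ports: ["8080:8080"]
  web:
    image: myrepo/webapp:latest
    ports: ["80:80"]
    depends_on: [osrm, nominatim]
EOF

# The user then starts the whole stack with a single command:
docker-compose up -d
```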
For clusters/cloud deployments, we talk instead about orchestrators like Docker Swarm, Kubernetes, or Nomad.
Kubernetes's documentation is the following:
https://kubernetes.io/

Does it make sense to manage Docker containers on a single host (or a few hosts) with Kubernetes?

I'm using docker on a bare metal server. I'm pretty happy with docker-compose to configure and setup applications.
Still, some features are missing, like configuration management and monitoring. Maybe there are other solutions to these issues, but I'm a bit overwhelmed by the feature set of Kubernetes and can't judge whether it would help me here.
I'm also open for recommendations to solve the requirements separately:
Configuration / Secret management
Monitoring of my Docker-hosted applications (e.g. having some kind of dashboard)
Remote container control (SSH is okay while I only have one server)
Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in future - already thinking about networking/service discovery issues with a pure docker-compose setup
I'm sure Kubernetes covers some of these features, but I have the feeling that it's too focused on cloud platforms where machines are created on the fly (since I only have at most a few bare-metal servers).
I hope the question's scope is not too broad; otherwise please use the comment section and help me narrow it down.
Thanks.
I think Kubernetes absolutely matches your requirements and is what you need.
Let's go through them one by one.
I have the feeling that it's too much focused on Cloud platforms where Machines are created on the fly (since I only have at most few bare metal Servers)
No, it is not focused on clouds. Kubernetes can be installed on almost any bare-metal platform (including ARM), and there are many tools and instructions that can help you do it. Also, it is easy to deploy on your local PC using Minikube, which will prepare a local cluster for you inside a VM or directly in your OS (Linux only).
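For example, assuming Minikube and kubectl are already installed, getting a local cluster to experiment with is just:

```
minikube start        # creates a single-node local cluster (in a VM or directly on Linux)
kubectl get nodes     # should show one node in the Ready state
```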
Configuration / Secret management
Kubernetes has powerful configuration and secret management based on special objects (ConfigMaps and Secrets) which can be attached to your containers. You can read more about configuration management in the Kubernetes documentation.
Moreover, tools like Helm can provide you with more automation and a range of preconfigured applications that you can install with a single command. And you can prepare your own charts for it.
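A minimal sketch of how this looks in practice (the object names and values are illustrative, not from your setup):

```
# Store plain configuration and a secret as Kubernetes objects...
kubectl create configmap myapp-config --from-literal=LOG_LEVEL=debug
kubectl create secret generic myapp-secret --from-literal=DB_PASSWORD=changeme

# ...which can then be mounted into your containers as environment variables or files.
kubectl get configmap myapp-config -o yaml
```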
Monitoring of my Docker-hosted applications (e.g. having some kind of dashboard)
Kubernetes has its own dashboard where you can get many kinds of information: current application status, configuration, statistics and more. Also, Kubernetes integrates well with Heapster, which can be used with Grafana for powerful visualization of almost anything.
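If you experiment with Minikube as suggested above, the dashboard add-on is already bundled; for example:

```
minikube dashboard    # opens the Kubernetes dashboard in your browser
kubectl top nodes     # quick resource-usage numbers (needs Heapster or another metrics add-on)
```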
Remote container control (SSH is okay while I only have one server)
Kubernetes's command-line tool kubectl can fetch logs and connect to containers in the cluster without any problems. As an example, to connect to a container in a pod called "myapp" you just call kubectl exec -it myapp sh, and you get an sh session inside the container. You can also reach applications inside your cluster with kubectl proxy (which exposes the cluster's API and services on your PC) or kubectl port-forward (which forwards a specific pod port to your PC).
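For instance, against a pod named "myapp" (a placeholder name), the day-to-day commands look like this:

```
kubectl logs myapp                   # view the container's logs
kubectl exec -it myapp -- sh         # open an interactive shell inside the container
kubectl port-forward myapp 8080:80   # make the pod's port 80 available on localhost:8080
```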
Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in future - already thinking about networking/service discovery issues with a pure docker-compose setup
Kubernetes can scale up to thousands of nodes, or it can run on just a single one; it is your choice. Independent of cluster size, you get production-grade networking, service discovery and load balancing.
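As a sketch, scaling out later is mostly declarative; assuming your app runs as a Deployment called "myapp" (a hypothetical name), you could do:

```
kubectl expose deployment myapp --port=80      # Service: stable name + load balancing across replicas
kubectl scale deployment myapp --replicas=5    # run five replicas, spread over however many nodes exist
```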
So, do not be afraid; just try it locally with Minikube. It will make many operational tasks simpler, not more complex.

Using RabbitMQ for communication between different Docker containers

I want two apps, stored in different Docker containers and both part of the same Docker network, to communicate with each other. I'll be using a message queue for this (RabbitMQ).
Should I make a third Docker container that runs as my RabbitMQ server, and then just create a channel on it for those two specific containers? That way, later on I can create more channels if, for example, a third app needs to communicate with the other two.
Regards!
Yes, that is the best way to use containers, and it will allow you to scale; you can also use the official RabbitMQ image and concentrate on your application.
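A rough sketch of this setup with the official image (the network name and app image names are placeholders):

```
# Put the broker and both apps on one user-defined network; containers then
# reach RabbitMQ simply by its container name ("rabbitmq") on port 5672.
docker network create app-net
docker run -d --name rabbitmq --network app-net rabbitmq:3-management
docker run -d --name app1 --network app-net myrepo/app1   # connects to amqp://rabbitmq:5672
docker run -d --name app2 --network app-net myrepo/app2
```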
If you have started using containers, then it's the right way to go. But if your app is deployed in the cloud (AWS, Azure and so on), it may be better to use a managed cloud queue service, which is already configured, updated automatically, comes with monitoring, and so on.
I'd also like to point out that Docker containers are only a way to deploy your application components. The application shouldn't care how its components (services, DBs, queues and so on) are deployed. To an app service, a message queue is simply a service located somewhere, reachable via connection parameters.

What is Container as a Service

What does Container as a Service (CaaS) mean in general terminology? I found that Kubernetes and Docker provide these services, but what does that mean?
Does "container" mean it provides a different OS platform on which to deploy our code?
It means, as presented at DockerCon, that Docker provides a set of services (a service platform) around containers for:
building,
shipping and
running:
Building and shipping can be either:
to a data center, or
to a cloud.
It means you combine IaaS and PaaS into CaaS: Infrastructure + Platform.
(Source: Hyper.sh blog, currently unavailable, from Thibault Bronchain)
The term CaaS also appears in the GOTO conference talk "Patterns for Docker Success" by Simon Eskildsen (video).
You take your containers and put them into such a service, and you don't care about the server and network structure behind them. For that, Google uses Kubernetes. So yes, if you want, you can deploy your containers on different services.
On AWS you can do the same and deploy your containers with the AWS container service (ECS).
https://aws.amazon.com/de/documentation/ecs/
In short: CaaS allows any Docker container to run on a provider's platform.

Docker, Registrator and Consul by example

I am new to both Docker and Consul, and am trying to get a feel for how containerized apps could use Consul for both service registry and KV pair config management ("configuration").
My understanding was that I could:
Create an image that runs a Consul server; then
Spin up three of these Docker-Consul containers (thus forming a cluster/quorum) on myvm01.example.com (an Ubuntu VM); then
Refactor my app to use Consul and create a Docker image that runs my app and Consul agent, with the agent configured to join the 3-node quorum at startup. On startup, my app uses the local Consul agent to pull down all of its configurations, stored as KV pairs. It also pulls in registered/healthy services, and uses a local load balancing tool to balance the services it integrates with.
Run my app's containers on, say, myvm02.example.com (another Ubuntu VM).
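For concreteness, what I have in mind for step 3 is roughly the following (addresses and the service name "billing" are placeholders):

```
# Inside the app container: join the quorum on myvm01, then let the app talk to
# the local agent for config (KV) and for healthy instances of services it uses.
consul agent -data-dir=/tmp/consul -retry-join=myvm01.example.com &
curl http://localhost:8500/v1/kv/myapp/config?recurse           # pull KV configuration
curl http://localhost:8500/v1/health/service/billing?passing    # discover healthy services
```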
So to begin with, if any of this seems like I am misunderstanding the normal/proper uses of Docker and Consul (sans Registrator), please begin by correcting me!
Assuming I'm more or less correct, I recently stumbled across Registrator and am now even more confused. Registrator seems to be some middleman between your app containers and your Consul (or whatever registry you use) servers.
After reading their Quickstart tutorial, it sounds like what you're supposed to do is:
Deploy my Consul cluster/quorum containers to myvm01.example.com like before
Instead of "Dockerizing" my app to use Consul directly, I simply integrate it with Registrator
Then I deploy a Registrator container somewhere, and configure it to integrate with Consul
Then I deploy my app containers. They integrate with Registrator, and Registrator in turn integrates with Consul.
My concerns:
Is my understanding here correct or way off base? If so, how?
What is actually gained by the addition of Registrator? It doesn't seem (to the untrained eye at least) like anything more than a layer of indirection between the app and the service registry.
Will I still be able to leverage Consul's KV config service through Registrator?
Is my understanding here correct or way off base? If so, how?
It seems to me that it's not a good solution to have all cluster/quorum members running inside the same VM. That's not so bad for development or testing, where you don't care much about reliability, but it's not acceptable for production.
Once your VM dies, you lose all the advantages you gained by creating a cluster. Even worse, you can lose all the data in your K/V store, because you are running the Consul servers inside Docker containers, which need additional configuration to persist their data between runs.
As for the rest, I see it the same as you.
What is actually gained by the addition of Registrator?
From my point of view, the main thing is that you don't have to run a Consul agent in every container. The container you run is then responsible only for its main function, not for registering itself somewhere. You can simply pull an image and run a container from it to make its service available, without any additional work.
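A sketch of that, following Registrator's quickstart (the Consul address points at the quorum from the question; the app image name is a placeholder):

```
# One Registrator per Docker host, watching the Docker socket and syncing with Consul.
docker run -d --name=registrator --net=host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest consul://myvm01.example.com:8500

# App containers only need to publish a port; Registrator registers/deregisters
# them in Consul automatically as they start and stop.
docker run -d -p 8000:8000 myrepo/myapp
```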
Will I still be able to leverage Consul's KV config service through Registrator?
Unfortunately, no. At least, we didn't find a way to use it like that when we were looking for a solution for service discovery and configuration management. We came to the conclusion that Registrator is not a proxy for the K/V store and is used only to automate service registration. So you have to use some other logic to access Consul's K/V store.
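The K/V store remains reachable directly through Consul's HTTP API (or one of its client libraries) from your own tooling; for example (the key and value here are illustrative):

```
# Write a config value and read it back via the local Consul agent's HTTP API.
curl -X PUT -d 'postgres://db:5432/app' http://localhost:8500/v1/kv/myapp/config/db_url
curl http://localhost:8500/v1/kv/myapp/config/db_url?raw
```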
Update: furthermore, here are two articles, "Automatic Docker Service Announcement with Registrator" and "Automatic container registration with Consul and Registrator", which I found useful for understanding Registrator's role in the service discovery process.
