Recently I came across Mantl (a microservices infrastructure management project by Cisco). It's open source and they have pushed it to GitHub. I don't understand how it basically works. Does anyone have any idea about that?
From my understanding, Mantl is a collection of tools/applications that tie together to create a cohesive Docker-based application platform. Mantl is ideally deployed on virtualized/cloud environments (AWS, OpenStack, GCE), but I have only recently been able to deploy it on bare metal.
The main component in Mantl is Mesos, which manages Docker containers and handles scheduling and task isolation. Marathon is a Mesos framework that manages long-running tasks, such as web services; this is where most applications reside. The combination of Mesos and Marathon handles application high availability, resiliency, and load balancing. Tying everything together is Consul, which handles service discovery. I use Consul to do lookups so applications can communicate with each other. Mantl also includes the ELK stack for logging, but I haven't had any success in monitoring any of my applications yet. There is also Chronos, where scheduled tasks are handled, à la cron. Traefik acts as a reverse proxy, mapping application/service endpoints to URLs so external services can reach them.
Basically, your microservices should be self-contained in Docker images, initiate communications via Consul lookups, and log to standard output. You then deploy your app using the Marathon API and monitor it in the Marathon UI. When deploying your dockerized app, Marathon will register your Docker image names in Consul, along with their exposed ports. Scheduled tasks should be deployed in Chronos, where you will be able to monitor running tasks and pending scheduled tasks.
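As a rough illustration, here is a minimal sketch (Python with requests; the hostnames, app id, and image name are hypothetical) of deploying a dockerized app through the Marathon REST API and then resolving it through Consul's catalog API:

```python
import requests

MARATHON = "http://marathon.example.com:8080"   # assumed Marathon endpoint
CONSUL = "http://consul.example.com:8500"       # assumed Consul agent endpoint

# Minimal Marathon app definition: one dockerized service with one exposed port.
app = {
    "id": "/orders-service",                    # hypothetical app name
    "instances": 2,
    "cpus": 0.5,
    "mem": 256,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "myrepo/orders-service:1.0",   # hypothetical image
            "network": "BRIDGE",
            "portMappings": [{"containerPort": 8080, "hostPort": 0}],
        },
    },
}

# Deploy the app via Marathon's REST API.
requests.post(f"{MARATHON}/v2/apps", json=app).raise_for_status()

# Later, another service can discover it through Consul's catalog API.
nodes = requests.get(f"{CONSUL}/v1/catalog/service/orders-service").json()
for node in nodes:
    print(node["ServiceAddress"] or node["Address"], node["ServicePort"])
```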
I am a newbie with SCDF. I have a few microservices running on the Spring Cloud platform. Each service has multiple nodes. Can we use those existing services in the SCDF platform as either a SOURCE, PROCESSOR, or SINK? If so, how would I get them into the dashboard, given that they are already deployed as services?
SCDF doesn't probe a given K8s cluster/namespace to automatically build streaming data pipelines.
Today, it is imperative that the streaming/task "definitions" are created and deployed in SCDF first; only then is it possible to monitor, scale, and manage the applications.
In case it wasn't apparent already, SCDF can only orchestrate the deployment and management of event-streaming and batch/task Spring Boot applications. Not all kinds of application workloads are possible.
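To make the "define first, then manage" workflow concrete, here is a rough sketch against SCDF's REST API (the server URL and app URIs are placeholders, and the app types/names are just examples):

```python
import requests

SCDF = "http://scdf-server.example.com:9393"   # placeholder SCDF server URL

# 1. Register the apps (their Boot artifacts/images must already exist somewhere).
requests.post(f"{SCDF}/apps/source/http",
              data={"uri": "docker:springcloudstream/http-source-kafka:latest"})
requests.post(f"{SCDF}/apps/sink/log",
              data={"uri": "docker:springcloudstream/log-sink-kafka:latest"})

# 2. Create the stream *definition* -- the step SCDF requires up front.
requests.post(f"{SCDF}/streams/definitions",
              data={"name": "httptest", "definition": "http | log"})

# 3. Deploy it; only now can you monitor/scale/manage it from the dashboard.
requests.post(f"{SCDF}/streams/deployments/httptest")
```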
We are working with a dockerized Kafka environment. I would like to know the best practices for deploying Kafka connectors and Kafka Streams applications in such a scenario. Currently we deploy each connector and stream as a Spring Boot application, started as a systemd service via systemctl. I do not see a significant advantage in dockerizing each Kafka connector and stream. Please share your insights on this.
To me the Docker vs non-Docker thing comes down to "what does your operations team or organization support?"
Dockerized applications have an advantage in that they all look/act the same: you docker run a Java app the same way you docker run a Ruby app. Whereas with an approach of running programs with systemd, there's not usually a common abstraction layer around "how do I run this thing?"
Dockerized applications may also abstract some small operational details, like port management, e.g. making sure all your apps' management.ports don't clash with each other. An application in a Docker container runs on one port inside the container, and you can expose that port as some other number outside (either random, or one of your choosing).
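For example, with the Docker SDK for Python (the image name and ports are made up), the app keeps listening on 8080 inside the container while the host publishes whatever port you choose:

```python
import docker

client = docker.from_env()

# The app listens on 8080 inside the container; the host publishes it on 18080.
container = client.containers.run(
    "myorg/java-app:latest",          # hypothetical image
    detach=True,
    ports={"8080/tcp": 18080},        # use None instead of 18080 for a random host port
)
print(container.name, container.status)
```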
Depending on the infrastructure support, a normal Docker scheduler may auto-scale a service when that service reaches some capacity. However, in Kafka Streams applications the concurrency is limited by the number of partitions in the Kafka topics, so scaling up will just mean some consumers in your consumer groups go idle (if there are more consumers than partitions).
But it also adds complications: if you use RocksDB as your local store, you'll likely want to persist it outside the (disposable, and maybe read-only!) container. So you'll need to figure out how to do volume persistence, operationally/organizationally. With plain ol' JARs under systemd... well, you always have the hard drive, and if the server crashes, either it will restart (physical machine) or hopefully it will be restored from some instance block storage.
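A sketch of that volume persistence, again with the Python Docker SDK (the image, paths, and STATE_DIR environment variable are assumptions): mount a host directory over the Kafka Streams state directory so the RocksDB store survives the container being replaced:

```python
import docker

client = docker.from_env()

# Persist the Kafka Streams state directory (RocksDB) outside the container,
# so a replacement container can pick the local store back up from the host.
client.containers.run(
    "myorg/kstreams-app:latest",                           # hypothetical image
    detach=True,
    environment={"STATE_DIR": "/var/lib/kafka-streams"},   # assumed app config for state.dir
    volumes={"/data/kstreams-state": {"bind": "/var/lib/kafka-streams", "mode": "rw"}},
)
```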
By this I mean to say: Kafka Streams apps are not stateless web apps serving HTTP traffic, where auto-scaling will always give you more power. The people making these decisions at an organization or operations level may not fully know this. Then again, if everyone writes Docker stuff, then the organization/operations team "just" has some Docker scheduler clusters (like a Kubernetes cluster, or Amazon ECS cluster) to manage, and doesn't have to manage VMs as directly anymore.
Dockerizing plus clustering with Kubernetes provides many benefits, like auto-healing and automatic horizontal scaling.
Auto-healing: if the Spring application crashes, Kubernetes will automatically start another instance and will ensure the required number of containers is always up.
Automatic horizontal scaling: if you get a burst of messages, you can tune the Spring applications to auto-scale up or down using an HPA, which can also use custom metrics.
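As an illustration (the deployment name, replica counts, and CPU threshold are placeholders), a CPU-based HPA can be created with the Kubernetes Python client like this; custom metrics would use the autoscaling/v2 API instead:

```python
from kubernetes import client, config

config.load_kube_config()   # or load_incluster_config() when running inside the cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="spring-app-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="spring-app"),  # placeholder deployment
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```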
I want to communicate between 2 apps running in different Docker containers, both part of the same Docker network. I'll be using a message queue for this (RabbitMQ).
Should I make a 3rd Docker container that will run as my RabbitMQ server, and then just make a channel on it for those 2 specific containers? So that later on I can add more channels if I need, for example, a 3rd app that needs to communicate with the other 2?
Regards!
Yes, that is the best way to utilize containers, and it will allow you to scale; you can also use the official RabbitMQ container and concentrate on your application.
If you have started using containers, then it's the right way to go. But if your app is deployed in the cloud (AWS, Azure, and so on), it's better to use a cloud queue service which is already configured, is updated automatically, has monitoring, and so on.
I'd also like to point out that Docker containers are only a way to deploy your application components. The application shouldn't care about how its components (services, DBs, queues, and so on) are deployed. To an app service, a message queue is simply a service located somewhere, accessible by connection parameters.
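To make the "connection parameters" point concrete, here is a minimal sketch with pika (the hostname "rabbitmq" is whatever name/alias the broker container has on the shared Docker network, and the queue name is made up):

```python
import pika

# "rabbitmq" is the broker container's name/alias on the shared Docker network.
params = pika.ConnectionParameters(host="rabbitmq")

# Producer (app 1): declare a queue and publish a message.
conn = pika.BlockingConnection(params)
channel = conn.channel()
channel.queue_declare(queue="app1-to-app2", durable=True)   # hypothetical queue name
channel.basic_publish(exchange="", routing_key="app1-to-app2", body=b"hello")
conn.close()

# Consumer (app 2) would open its own connection to the same host and call, e.g.:
#   channel.basic_consume(queue="app1-to-app2", on_message_callback=handler)
#   channel.start_consuming()
```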
I'm pretty new to Docker orchestration and managing a fleet of containers. I want to build an app that would give the user a container when they run a command. What is the best tool and best way to accomplish this?
I plan on having a pool of CoreOS servers to run the containers on and I'm imagining the scheduler to have an API that I can just call to create the container.
Most of what I have seen with Nomad, Kubernetes, Docker Swarm, etc. is how to provision multiple clusters of containers all doing the same thing. I want to be able to create a single container based on a user's command and then be able to communicate with an API on that container. Does anyone have experience with this?
I'd look at Kubernetes plus the Jobs API (short-lived) or Deployments (long-lived).
I'm not sure exactly what you mean by command, but I'll assume it's some sort of dev environment triggered by a CLI, make-dev.
User triggers make-dev, which sends a webhook to your app sitting in front of the Jobs API, ideally doing rate-limiting and/or auth.
Your app takes the command, sanity-checks it, then fires off a Job/Deployment request plus an Ingress rule and a Service (see the sketch below).
Kubernetes will schedule it out across your fleet of machines
Your app waits for the pod to start, then returns the address of the API with a unique identifier (the same one used in the Ingress rule), like devclusters.com/foobar123.
User now accesses their service at that address. Internally, Kubernetes uses the Ingress and Service to route the requests to your pod.
This should scale well, and if your different environments use the same base container image, they should start really fast.
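Here is a rough sketch of the "fire off a Job request" step using the Kubernetes Python client (the names, image, and namespace are placeholders); the Ingress and Service would be created the same way through the networking and core APIs:

```python
from kubernetes import client, config

config.load_kube_config()

def create_dev_env(user_id: str):
    """Create a short-lived Job for one user's environment (all names are placeholders)."""
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=f"dev-env-{user_id}"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": f"dev-env-{user_id}"}),
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[client.V1Container(
                        name="dev-env",
                        image="myorg/dev-env:latest",   # shared base image -> faster starts
                    )],
                ),
            ),
        ),
    )
    return client.BatchV1Api().create_namespaced_job(namespace="dev", body=job)
```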
Plug: If you want an easy CoreOS + Kubernetes cluster plus a UI, try https://coreos.com/tectonic
I plan on having a pool of CoreOS servers to run the containers on and I'm imagining the scheduler to have an API that I can just call to create the container
Kubernetes comes with a RESTful API that you can use to directly create pods (the unit of work in Kubernetes, which contains one or more containers) within your cluster.
The command-line utility kubectl also interacts with the cluster in exactly the same way, via the API. There are client libraries written in Go, Java, and Python at the moment, with others on the way, to help communicate with the cluster.
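For example, with the Python client (the pod name, image, and namespace are made up), creating a pod directly looks like this:

```python
from kubernetes import client, config

config.load_kube_config()   # picks up your kubeconfig, just like kubectl

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="user-container-123"),   # hypothetical pod name
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="app", image="myorg/user-app:latest")],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```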
If you later want a higher-level abstraction to manage pods, update them, and manage their lifetimes, looking at one of the controllers (ReplicaSet, ReplicationController, Deployment, StatefulSet) should help.
I'm using Amazon ECS to auto-deploy my containers on uat/production.
What is the best way to do that?
I have a REST API with several front-end clients.
Should I package my API and nginx in the same container?
And do the same thing with the other front-end clients?
Or do I have to write a big task definition to bring together all my containers (db, nginx, php, api, clients)? :( But that means I would have to redeploy all my infrastructure on each push to uat/prod.
I'm very confused.
I would avoid including too much in a single container. Try and distill your containers down to one process doing one thing. If all you're doing is serving up a REST API for consumption by your front end, just put the essential pieces in for that and no more.
In my experience you also want your ECS tasks to be able to handle failure gracefully and restart, and the more complicated your containers are the harder this is to get right.
Depending on your requirements I would look into using ELB instead of nginx, you can have your ECS cluster point at an ELB and not have to deal with that piece at all.
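For instance (the family name, image, ports, and region are placeholders), a lean task definition containing just the API container might be registered like this with boto3, with an ELB/ALB in front instead of an nginx container:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")   # region is a placeholder

# One task = one lean container doing one thing: serving the REST API.
ecs.register_task_definition(
    family="rest-api",                                # hypothetical family name
    containerDefinitions=[{
        "name": "api",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/rest-api:latest",  # placeholder
        "memory": 512,
        "essential": True,
        "portMappings": [{"containerPort": 8080, "hostPort": 0}],  # dynamic host port behind the ELB
    }],
)
```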
Do not use ECS; it's too crude. I was using it as a platform for our staging/production environments and had odd problems during deployments: sometimes it worked well, sometimes not (with the same Docker images). ECS does not provide a clear model of container deployment and maintenance.
There is another good, stable, and predictable option: the Docker Cloud service. It's a new tool (formerly Tutum) that was acquired by Docker. I switched our CI/CD to use it and we're happy with it.
Bind your Amazon user credentials to your Docker Cloud account. Docker Cloud uses the AWS (or other provider) API to create the appropriate compute instances.
Create a node. Select the Amazon EC2 instance type and the parameters for storage, security group, and so on. The new instance will have Docker installed plus a management container that handles messages from Docker Cloud (deploy, destroy, and others).
Create a Stackfile; see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/. A Stackfile is a definition of the container group you require. You can define different scaling/distribution models for your containers using specific Stackfile options like the deployment strategy; see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/#deployment-strategy-1.
Define ELB configurations in AWS for your new instances.
P.S. I'm not a member of Docker team and I like other AWS services :).
Here are my two cents on the topic. The question is not really specific to ECS; it applies to anybody deploying their apps on Docker.
I would suggest separating the containers: one for nginx and one for the API.
If they need to be co-located on the same instance, on ECS you can define them as part of the same task, and on Kubernetes you can make them part of the same pod.
Define a Docker link between the nginx and API containers. This will allow the nginx process to talk to the API container without the API container exposing its ports to the host.
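On plain Docker, that separation might look like the following sketch, using the Python Docker SDK and a user-defined bridge network (the modern replacement for legacy links); the image names and the internal API port are made up:

```python
import docker

client = docker.from_env()

# A user-defined bridge network gives containers name-based DNS,
# which replaces legacy --link for container-to-container traffic.
client.networks.create("app-net", driver="bridge")

# API container: joins the network but publishes no ports to the host.
client.containers.run("myorg/rest-api:latest", name="api",
                      network="app-net", detach=True)

# nginx container: reaches the API at http://api:8080 (assumed internal port)
# over the network, and is the only one exposing a port to the outside.
client.containers.run("nginx:latest", name="nginx",
                      network="app-net", ports={"80/tcp": 80}, detach=True)
```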
One advantage of using container-running platforms such as Kubernetes and ECS is that they ensure each of the containers runs all the time and is restarted dynamically if one of the processes/containers goes down.
Separating the containers allows these platforms to monitor both processes separately. When you combine the two into one container, the Docker container can only run with one of the processes in the foreground, so you lose the advantage of auto-healing for the other process.
Also, moving from nginx to ELB is not a straightforward swap; you may have redirects and other things configured in nginx which are not available on ELB (as of this date).
If you also need the ELB, there is no harm in forwarding the requests from the ELB to the nginx port.