What's the best practice to set up the SCDF server for failover? I am talking about the SCDF server itself, not the streams and tasks that you deploy in SCDF. I am planning to use Kubernetes as the runtime.
Depending on the runtime platform of your choice, you can run multiple SCDF server instances with a load balancer in front to route traffic among them. The idea is that you'd have a backup (server) instance ready to serve the traffic under failure scenarios.
That said, in Kubernetes, the replication controller keeps track of the pod running the SCDF server and, upon failure, automatically creates a new pod to re-establish server operation anyway.
A similar capability is also available for the PCF, Mesos, and YARN implementations of SCDF.
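For illustration, a minimal sketch of this setup on Kubernetes might look like the following (the image name/tag and replica count are assumptions, not an official recipe; the SCDF server listens on port 9393 by default). A Deployment plays the role of the replication controller mentioned above and keeps a spare replica running, while a LoadBalancer Service routes traffic to whichever instance is healthy.

```yaml
# Sketch only: image/tag and replica count are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scdf-server
spec:
  replicas: 2                     # a standby instance to take over on failure
  selector:
    matchLabels:
      app: scdf-server
  template:
    metadata:
      labels:
        app: scdf-server
    spec:
      containers:
        - name: scdf-server
          image: springcloud/spring-cloud-dataflow-server:latest   # assumed image/tag
          ports:
            - containerPort: 9393                                  # SCDF server's default port
---
apiVersion: v1
kind: Service
metadata:
  name: scdf-server
spec:
  type: LoadBalancer              # fronts the replicas and routes around failed pods
  selector:
    app: scdf-server
  ports:
    - port: 80
      targetPort: 9393
```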
Related
I am a newbie to SCDF. I have a few microservices running on the Spring Cloud platform, and each service has multiple nodes. Can we use those existing services in the SCDF platform as a SOURCE, PROCESSOR, or SINK? If so, how would I get them into the dashboard, given they are already deployed as services?
SCDF doesn't probe a given K8s cluster/namespace to automatically build streaming data pipelines.
Today, it is imperative that the streaming/task "definitions" are created and deployed in SCDF first; only then is it possible to monitor, scale, and manage the applications.
In case it wasn't apparent already, SCDF can only orchestrate the deployment and management of event-streaming and batch/task Spring Boot applications. Not every kind of application workload is supported.
I am designing the architecture of my software instance provisioning system. For this I will use Kubernetes in such a way that each client will have their own namespace containing their own pods. However, the Kubernetes cluster will have a common entry point to all the instances, which will be an nginx server.
My question is as follows: since the software allows file uploads, if several of my clients decide to upload files at the same time, do I run the risk that the nginx server becomes overloaded and nobody can access their instance?
Is there any good practice I should follow when designing my architecture?
You could use the nginx ingress controller and deploy it with multiple replicas so that it can be scaled up to handle the load. Then your nginx is part of the cluster (rather than a separate server) and can take advantage of the Kubernetes cluster's capacity for horizontal scaling.
The exception is if you are running on-prem with NodePort/HostPort; then you might want to run nginx as an external load balancer, since in that case you don't get one from a cloud provider, and you can configure rate-limiting and throttling directly in nginx. In the cloud, you can also do this with annotations on the nginx ingress.
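As a rough illustration, you could scale out the ingress controller Deployment (e.g. `kubectl scale deployment nginx-ingress-controller --replicas=3`, assuming that's the Deployment name in your cluster) and attach the nginx ingress controller's `limit-rps`/`limit-connections` annotations to each client's Ingress. The host, service name, and limits below are made up.

```yaml
# Sketch only: host, service name, and limit values are assumptions.
apiVersion: extensions/v1beta1        # older API group; newer clusters use networking.k8s.io/v1
kind: Ingress
metadata:
  name: client1-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"          # max requests per second per client IP
    nginx.ingress.kubernetes.io/limit-connections: "5"   # max concurrent connections per client IP
spec:
  rules:
    - host: client1.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: client1-app   # the client's instance Service in its namespace
              servicePort: 80
```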
We're looking at using Spring Cloud Task / Spring Cloud Data Flow for our batch processing needs as we're modernising from a legacy system. We don't want or need the whole microservices offering ... we want to be able to deploy jobs/tasks, kick off batch processes, have them log to a log file, and share a database connection pool and message queue. We don't need the whole PaaS that's provided by Spring Cloud Foundry, and we don't want to pay for that, but we do want the Data Flow / Task framework to be commercially supported. Is such an option available?
Spring Cloud Data Flow (SCDF) builds upon the spring-cloud-deployer abstraction to deploy stream/task workloads to a variety of runtimes including Cloud Foundry, Kubernetes, Mesos, and YARN - see this visual.
You'd need a runtime for SCDF to orchestrate these workloads in a production setting. If there's no scope for cloud infrastructure, the YARN-based deployment could be a viable option for a standalone bare-metal installation. Please review the reference guide and the Apache Ambari provisioning tools for more details. There's a separate commercial support option available for this type of installation.
I'm using Amazon ECS to auto-deploy my containers to UAT/production.
What is the best way to do that?
I have a REST API with several front-end clients.
Should I package my API and nginx in the same container?
And do the same thing with the other front-end clients?
Or do I have to write one big task definition to bring together all my containers (db, nginx, php, api, clients)? :( But that would mean redeploying my whole infrastructure on every push to UAT/prod.
I'm very confused.
I would avoid including too much in a single container. Try and distill your containers down to one process doing one thing. If all you're doing is serving up a REST API for consumption by your front end, just put the essential pieces in for that and no more.
In my experience you also want your ECS tasks to be able to handle failure gracefully and restart, and the more complicated your containers are the harder this is to get right.
Depending on your requirements, I would look into using an ELB instead of nginx; you can have your ECS cluster point at an ELB and not have to deal with that piece at all.
Do not use ECS - it's too crude. I was using it as a platform for our staging/production environments and had odd problems during deployments - sometimes it worked well, sometimes not (with the same Docker images). ECS does not provide a clear model of container deployment and maintenance.
There is another good, stable, and predictable option - the Docker Cloud service. It's a new tool (formerly Tutum) that was acquired by Docker. I switched our CI/CD to use it and we're happy with it.
Bind your Amazon user credentials to your Docker Cloud account. Docker Cloud uses the AWS (or other provider) API to create the appropriate compute instances.
Create a node. Select the Amazon EC2 instance type and the parameters for storage, security group, and so on. The new instance will come with Docker installed and a managing container that handles messages from Docker Cloud (deploy, destroy, and others).
Create a Stackfile, see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/. A Stackfile is a definition of the container group you require. You can define different scaling/distribution models for your containers using specific Stackfile options such as the deployment strategy, see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/#deployment-strategy-1; a minimal sketch follows this list.
Define the ELB configuration in AWS for your new instances.
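For illustration, a minimal Stackfile sketch might look like this. The service and image names are made up, and the field names follow the stack YAML reference linked above; check that reference for the exact option values supported.

```yaml
# Sketch only: service/image names are hypothetical; see the stack YAML reference for details.
api:
  image: 'myorg/api:latest'
  target_num_containers: 2                 # how many containers Docker Cloud keeps running
  deployment_strategy: high_availability   # spread the containers across different nodes
lb:
  image: 'dockercloud/haproxy:latest'      # Docker Cloud's load-balancer image
  links:
    - api                                  # haproxy discovers and balances the linked api containers
  ports:
    - '80:80'
```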
P.S. I'm not a member of the Docker team, and I do like other AWS services :).
Here are my two cents on the topic; the question is not really specific to ECS, it applies to anybody deploying their apps on Docker.
I would suggest separating the containers: one for nginx and one for the API.
If they need to be co-located on the same instance, on ECS you can define them as part of the same task, and on Kubernetes you can make them part of the same pod.
Define a Docker link between the nginx and the API container. This allows the nginx process to talk to the API container without the API container exposing its ports to the host (see the compose-style sketch at the end of this answer).
One advantage of using container platforms such as Kubernetes and ECS is that they ensure each container keeps running and is dynamically restarted if one of the processes/containers goes down.
Separating the containers allows these platforms to monitor both processes independently. When you combine the two into one container, the Docker container can only run with one of the processes in the foreground, so you lose the advantage of auto-healing for the other process.
Also, moving from nginx to an ELB is not a straightforward switch; you may have redirects and other things configured on nginx which are not available on an ELB (as of this date).
If you also need an ELB, there is no harm in forwarding requests from the ELB to the nginx port.
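To make the linking concrete, here is a docker-compose-style sketch (image names and ports are made up). On ECS the equivalent is the `links` setting inside the task definition's container definitions, and on Kubernetes the two containers would simply share a pod and talk over localhost.

```yaml
# Sketch only: image names and ports are hypothetical.
nginx:
  image: nginx:latest
  ports:
    - '80:80'                  # only nginx is published to the outside world
  links:
    - api                      # lets nginx reach the API at http://api:8080
api:
  image: myorg/rest-api:latest
  expose:
    - '8080'                   # visible to linked containers, not published on the host
```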
Recently I came across Mantl (a microservices infrastructure management project by Cisco). It's open source and they have pushed it to GitHub. I don't understand how it basically works. Does anyone have any idea about it?
From my understanding, Mantl is a collection of tools/applications tied together to create a cohesive Docker-based application platform. Mantl is ideally deployed on virtualized/cloud environments (AWS, OpenStack, GCE), but I have just recently been able to deploy it on bare metal.
The main component in Mantl is Mesos, which manages Docker containers and handles scheduling and task isolation. Marathon is a Mesos framework that manages long-running tasks such as web services; this is where most applications reside. The combination of Mesos and Marathon handles application high availability, resiliency, and load balancing. Tying everything together is Consul, which handles service discovery; I use Consul lookups so each application can communicate with the others. Mantl also includes the ELK stack for logging, but I haven't had any success in monitoring any of my applications with it yet. There is also Chronos, where scheduled tasks are handled à la cron. Traefik acts as a reverse proxy, mapping application/service endpoints to URLs so external services can reach them.
Basically, your microservices should be self-contained in Docker images, initiate communication via Consul lookups, and log to standard I/O. Then you deploy your app using the Marathon API and monitor it in the Marathon UI. When deploying your dockerized app, Marathon will register your Docker image names in Consul, along with their exposed ports. Scheduled tasks should be deployed in Chronos, where you will be able to monitor running tasks and pending scheduled tasks.
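As a rough illustration, a Marathon app definition for one such service might look like the following (the app id, image, and health-check path are made up); the JSON equivalent of this is what gets POSTed to Marathon's `/v2/apps` endpoint.

```yaml
# Sketch only: id, image, and health-check path are hypothetical.
id: /orders-service
instances: 2                    # Marathon keeps two copies running and replaces failed ones
cpus: 0.5
mem: 256
container:
  type: DOCKER
  docker:
    image: myorg/orders-service:latest
    network: BRIDGE
    portMappings:
      - containerPort: 8080     # the port the service listens on inside the container
        hostPort: 0             # let Mesos assign a host port; Consul can then be used for discovery
healthChecks:
  - protocol: HTTP
    path: /health               # assumed health endpoint
```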