My problem statement is one-click Cassandra deployment:
set up a multi-node Cassandra cluster (with autoscaling).
Once any change is made to the Cassandra config, it should automatically be deployed to the full Cassandra cluster, possibly using Jenkins.
This will run on a private cloud.
I have set up a single-node Cassandra cluster on Kubernetes using a StatefulSet.
What I am looking for are pointers or articles on automating the Cassandra setup so that any change to the Cassandra configs, once committed to GitHub or any other repository, is rolled out automatically (if that is possible), and on autoscaling the Cassandra cluster.
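To make the first part concrete, this is roughly the kind of Jenkins build step I have in mind (a hedged sketch only; the ConfigMap name cassandra-config and the StatefulSet name cassandra are assumptions, and the StatefulSet would need to mount that ConfigMap for the restart to pick anything up):

    # Hypothetical Jenkins build step, run after the config repo changes.
    # Assumes cassandra.yaml is in the repo and the StatefulSet "cassandra"
    # mounts the ConfigMap "cassandra-config".
    kubectl create configmap cassandra-config \
        --from-file=cassandra.yaml \
        --dry-run=client -o yaml | kubectl apply -f -
    # Roll the pods one at a time so each picks up the new config.
    kubectl rollout restart statefulset/cassandra
    kubectl rollout status statefulset/cassandra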
Thanks for any suggestions.
Related
I need to deploy Vespa on multiple instances (3) using Docker. What configuration changes do I have to make in my application package or in Docker so that I can run the admin node, container node, and content node on separate instances?
Did you check out the Docker Swarm example in the documentation? For a more manual approach, you can look at the Multi-Node Quick Start for AWS EC2, as it shows an example of a multi-node (multi-instance) config that should be applicable in your case.
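On the application-package side, a minimal sketch of what a multi-node layout usually involves (the hostnames below are placeholders, not taken from the documentation): hosts.xml maps each instance to an alias, and services.xml then assigns the admin, container, and content services to those aliases.

    <!-- hosts.xml: placeholder hostnames, one per instance -->
    <hosts>
      <host name="node0.example.com"><alias>admin0</alias></host>
      <host name="node1.example.com"><alias>container0</alias></host>
      <host name="node2.example.com"><alias>content0</alias></host>
    </hosts>

In services.xml you would then reference these aliases, e.g. <node hostalias="container0"/> under the container cluster and <node hostalias="content0" distribution-key="0"/> under the content cluster.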
I have a set of Docker containers running in a Kubernetes cluster on GKE, and a Jenkins server running on a VM in GKE.
I have Docker builds and GKE deploys running on the Jenkins server, but I would like to start up a 'local' cluster on the Jenkins server after a successful build, run my containers in that cluster, run my tests against it, and then tear the local cluster down before deploying the images to GKE.
I know about Minikube, but its documentation states that you cannot run nested VMs, and I wonder if this blocks my plan of testing my cluster before deploying it.
Do I have to run my local cluster on a physical server to be able to run my tests, or is there a solution to my problem?
Have you considered using kubeadm?
You can run a Kubernetes cluster within your Jenkins VM. Setup is a bit different from Minikube, and it's still in beta, but it will let you test your cluster before the final deployment.
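A rough sketch of what bootstrapping with kubeadm inside the VM looks like (assuming a Linux VM with Docker and the kubeadm packages already installed; the flannel manifest is just one common network add-on choice):

    # Run on the Jenkins VM itself.
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # Allow workloads on this single node by removing the master taint.
    kubectl taint nodes --all node-role.kubernetes.io/master-
    # Install a pod network add-on (flannel shown as one option).
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml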
I would like my Jenkins master (not containerized) to create slaves within containers. So I have installed the Docker plugin in Jenkins, created a Docker server, and configured it, and Jenkins does indeed spin up a slave container fine once a job is created.
However, after I created another Docker server, formed a swarm out of the two, and tried running Jenkins jobs again, it continued to deploy containers only on the original server (which is now also a manager). I'd expect the swarm to balance the load and distribute the newly created containers evenly across its nodes. What am I missing?
Do I have to use a service perhaps?
Docker containers by themselves are not load balanced, even when deployed in a swarm. What you're looking for is indeed a service definition. Just be careful with port allocation: if you deploy your Jenkins slaves to listen on port 80, for example, all swarm hosts will listen on port 80 and mesh-route requests to the containers.
That basically means you couldn't deploy anything else on port 80 on those hosts. Once that's done, however, any request to any of the hosts will be load balanced across the containers.
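For example, a service definition for the slaves might look like this (a sketch; the agent image name and Jenkins URL are placeholders you would substitute):

    # Hypothetical example; replace the image and URL with your own.
    docker service create --name JenkinsService \
      --replicas 2 \
      --publish 8080:8080 \
      --env JENKINS_URL=http://jenkins.example.com:8080 \
      your-jenkins-agent-image

With a service, the swarm scheduler spreads the replicas across the nodes instead of packing them all onto the manager.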
The other nice thing is that you can dynamically change the number of replicas with service update:

    docker service update --replicas 42 JenkinsService
While 42 may be extreme, you could obviously change it :)
At the time, there was nothing I could find in Swarm that would let me control container distribution across the swarm nodes.
I ended up using the more flexible Kubernetes for that purpose. I think Marathon is capable of that as well.
I have a 3-node Mesos cluster with the Marathon framework. The slaves have Docker, and I want to deploy a few WildFly instances on one node.
How can I deploy several WildFly Docker containers on a single Mesos slave node with Marathon?
Deploying a Docker container using Marathon is usually straightforward.
Do I understand correctly that you want to deploy several containers onto a single slave? In that case you should look at Marathon's constraints; a sketch follows.
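For illustration, a minimal Marathon app definition along those lines (a sketch; the hostname and image are assumptions, and you would POST it to Marathon's /v2/apps endpoint). Setting hostPort to 0 lets Mesos assign random host ports, so several instances can coexist on the same node, and the hostname constraint pins all instances to one slave:

    {
      "id": "/wildfly",
      "instances": 3,
      "cpus": 0.5,
      "mem": 512,
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "jboss/wildfly",
          "network": "BRIDGE",
          "portMappings": [{ "containerPort": 8080, "hostPort": 0 }]
        }
      },
      "constraints": [["hostname", "CLUSTER", "slave-1.example.com"]]
    }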
In most tutorials, presentations, and demos, only stateless services are presented, load balanced either via DNS (SkyDNS, skydock, etc.) or via a reverse proxy such as HAProxy or Vulcand, configured through etcd or ZooKeeper.
Is there a best practice for deploying a cluster of MariaDB and Redis using:
CoreOS + fleet + Docker;
Mesos + Marathon + Docker; or
any other cluster management solution?
How can one configure a Redis cluster and a MariaDB (Galera) cluster when the host running the master may change?
Some related links I have found so far:
https://github.com/sheldonh/coreos-vagrant/tree/master/redis
http://www.severalnines.com/blog/how-deploy-galera-cluster-mysql-using-docker-containers
After posting the question, I was lucky and came across a few repositories that have achieved what I am looking for:
Redis
https://github.com/mdevilliers/docker-rediscluster - a Redis cluster with two Redis instances and three Redis Sentinel monitors. If the master fails, the Sentinels promote the slave to master. Mark has also created a project that configures HAProxy to use the promoted master - https://github.com/mdevilliers/redishappy
Percona/Galera cluster
An out-of-the-box working Docker image - https://github.com/paulczar/docker-percona_galera
You could use CoreOS (or any other platform where Docker can run) and Kubernetes with SkyDNS integration; this would allow you to fetch the IP address of the master. Kubernetes also comes with a proxy (for service discovery) that sets environment variables in your pods, which you can read at runtime. I think the best way (and the way you will need to go) is to use a service discovery tool like SkyDNS or something similar. Here is a simple Kubernetes example.
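As a minimal sketch of what such a Service might look like (the redis-master name and the labels are assumptions on my part, not taken from that example):

    # A Service in front of whichever pod currently carries the master labels.
    apiVersion: v1
    kind: Service
    metadata:
      name: redis-master
    spec:
      selector:
        app: redis
        role: master
      ports:
      - port: 6379

Pods created after this Service exists get REDIS_MASTER_SERVICE_HOST and REDIS_MASTER_SERVICE_PORT in their environment, so a client can find the master at runtime without hard-coding an IP address.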
You could also do this with fleet and sidekick units, but I think Kubernetes makes some things a little easier for you and is the better choice. It is just a little bit tricky to set up :)
I haven't used Mesos and Marathon so far, but I think they should be able to do this too. They have all the tools you need to set up your cluster (https://github.com/mesosphere/marathon#features).