Mesos, Marathon, Docker, WildFly

I have a 3-node Mesos cluster with the Marathon framework. On the slaves I have Docker, and I want to deploy a few WildFly instances on one node.
How can I deploy several WildFly Docker containers on a single Mesos slave node through Marathon?

Deploying a Docker container using Marathon is usually straightforward.
Do I understand correctly that you want to deploy several containers onto a single slave? In that case you should look at Marathon's constraints.
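For illustration, here is a minimal sketch of a Marathon app definition that pins three WildFly containers to a single slave using a CLUSTER constraint. The Marathon address, slave hostname, and app id are placeholders; hostPort 0 asks Mesos to assign free host ports so the instances can coexist on one node.

# POST the app definition to Marathon (hostnames are placeholders)
curl -X POST http://marathon.example.com:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d '{
    "id": "/wildfly",
    "instances": 3,
    "cpus": 1,
    "mem": 512,
    "container": {
      "type": "DOCKER",
      "docker": {
        "image": "jboss/wildfly",
        "network": "BRIDGE",
        "portMappings": [{ "containerPort": 8080, "hostPort": 0 }]
      }
    },
    "constraints": [["hostname", "CLUSTER", "slave-1.example.com"]]
  }'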

Related

How to balance containers on a newly added node with the same elastic IP?

I need help distributing already-running containers onto a newly added Docker swarm worker node.
I am running Docker 18.09.5 in swarm mode and use AWS Auto Scaling to create 3 managers and 4 workers. For high availability, if one of the workers goes down, all of its containers are rescheduled onto the other workers. When autoscaling brings a new node up, I add that worker to the existing swarm with some automation, but swarm does not balance any containers onto the new node. Even redeploying the stack does not rebalance the containers. Is it because the new node has a different node ID? How can I customize this? I deploy the stack from a Compose file:
docker stack deploy -c dockerstack.yml NAME
The only (current) way to force re-balancing is to force-update the services. See https://docs.docker.com/engine/swarm/admin_guide/#force-the-swarm-to-rebalance for more information.
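As a sketch, one way to do that is to force-update every service in the swarm; the update makes the scheduler reconsider placement, so tasks can land on the new worker:

# iterate over all service IDs and force a rolling update of each
for service in $(docker service ls -q); do
  docker service update --force "$service"
done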

How do I create a local Kubernetes cluster in a VM?

I have a set of Docker images running in a Kubernetes cluster on GKE, and a Jenkins server running on a VM in GKE.
Docker builds and GKE deploys run on the Jenkins server, but after a successful build I would like to start up a 'local' cluster on the Jenkins server, run my containers in that cluster, run my tests against it, and then tear the local cluster down before deploying the images to GKE.
I know about minikube, but its documentation states that you cannot run nested VMs, and I wonder if that blocks my plan of testing my cluster before deploying it.
Do I have to run my local cluster on a physical server to be able to run my tests, or is there a solution to my problem?
Have you considered using kubeadm?
You can run a Kubernetes cluster within your Jenkins VM. Setup is a bit different from minikube and it's still in beta, but it will let you test your cluster before the final deployment.
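A minimal sketch of that flow inside the Jenkins VM (the pod CIDR is an assumption; the taint removal is only needed because this is a single-node cluster):

# bring up a single-node control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# point kubectl at the new cluster
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# allow test workloads to schedule on the control-plane node
kubectl taint nodes --all node-role.kubernetes.io/master-

# ... deploy images, run tests ...

# tear the local cluster down after the tests
sudo kubeadm reset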

How to Distribute Jenkins Slave Containers Within Docker Swarm

I would like my Jenkins master (not containerized) to create slaves within containers. I have installed the Docker plugin into Jenkins, created a Docker server, configured it, and Jenkins does indeed spin up a slave container fine after job creation.
However, after I created another Docker server and built a swarm out of the two of them, Jenkins jobs continued to deploy containers only on the original server (which is now also a manager). I was expecting the swarm to balance the load and distribute the newly created containers evenly across the nodes. What am I missing?
Do I have to use a service, perhaps?
Containers by themselves are not load balanced, even when deployed in a swarm. What you're looking for is indeed a service definition. Just be careful with port allocation: if your Jenkins slaves are published on port 80, every swarm host will listen on port 80 and mesh-route requests to the containers.
That basically means you couldn't deploy anything else on port 80 on those hosts. Once the service is up, however, any request to any host is load balanced across the containers.
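A minimal sketch of such a service (the image name and ports are hypothetical):

# a replicated service; the scheduler spreads replicas across nodes,
# and the published port is mesh-routed on every swarm host
docker service create \
  --name JenkinsService \
  --replicas 4 \
  --publish 8080:80 \
  myorg/jenkins-slave:latest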
The other nice thing is that you can dynamically change the number of replicas with service update
docker service update --replicas 42 JenkinsService
While 42 may be extreme, you could obviously change it :)
At the time, I could find nothing in swarm that would let me control container distribution across the swarm nodes.
I ended up using the more flexible Kubernetes for that purpose. I think Marathon is capable of that as well.

Docker Containers on Marathon disappeared

I have been having great success so far using Mesos, Marathon, and Docker to manage a fleet of servers and the containers I place on them.
However, due to some problems I had to destroy my Mesos master machine. When I started the machine again and reconnected it to the ZooKeeper cluster, I no longer see any Docker containers in Marathon.
The containers are still running on the Mesos slaves; I can see them when I ssh into those servers. They just no longer appear in the Marathon GUI.
I could go back and start all the containers again manually in the Marathon GUI (which takes a lot of time), but there has to be a smarter way to do it.
Can somebody help me out here? My understanding of the logic of Mesos and ZooKeeper tells me that my actions should not have had these implications, and that the Mesos cluster should be smart enough to still see the Docker containers.

Mesos + Docker, do I automatically get the benefits of Mesos HA, etc.?

Mesos now supports Docker. If I run Docker as an executor, do I still get some of the high-availability and scheduling benefits of Mesos, or do I have to run Docker tasks within, e.g., Marathon to get them?
What would be the benefit of using Mesos + (native) Docker instead of just plain Docker without Mesos? I understand the idea of using Mesos + Marathon + Docker tasks because I get the HA and failover benefits.
Mesos natively supports Docker images as executors within your framework. The benefit is that you can deploy a Docker container without having to be aware of the internal topology of your server cluster; with plain Docker, you have to connect to the exact remote host and do host-specific configuration.
As for HA, you will need a meta-framework (like Marathon, Docker Swarm, etc.) to monitor your instances and provide redundancy and fault tolerance. With Marathon this is trivially easy and works by default.
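To illustrate, much of that fault tolerance comes from simply declaring an instance count and a health check in the Marathon app definition; Marathon then restarts or reschedules tasks that die or fail their checks. A sketch, with the endpoint, app id, and image as assumptions:

# Marathon keeps 3 healthy instances running; tasks that fail the
# health check 3 times in a row are killed and rescheduled
curl -X POST http://marathon.example.com:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d '{
    "id": "/my-app",
    "instances": 3,
    "cpus": 0.5,
    "mem": 256,
    "container": {
      "type": "DOCKER",
      "docker": { "image": "myorg/my-app:latest", "network": "BRIDGE" }
    },
    "healthChecks": [{
      "protocol": "HTTP",
      "path": "/health",
      "gracePeriodSeconds": 30,
      "intervalSeconds": 10,
      "maxConsecutiveFailures": 3
    }]
  }'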
