All,
I've searched high and low for this but was not able to find a reliable answer. The question may be simple for some pros, but please help me with this...
We have a situation where we need Jenkins to be able to access and build within Docker containers. The target Docker containers are built and instantiated with a separate docker-compose file. What would be the best way of connecting Jenkins with the Docker containers in each of the scenarios below?
Scenario 1: Jenkins is set up on the host machine itself, and 2 Docker containers are instantiated using their own docker-compose file. How can Jenkins connect to the containers in this situation? The host cannot ping the Docker containers since the two are on different networks (the host on the physical network, the Docker containers on Docker's internal network), hence presumably no SSH either?
Scenario 2: We would prefer Jenkins to be in its own container (with its own docker-compose file) so that we can replicate the same setup in other environments. How can Jenkins connect to the containers in this situation? The Jenkins container cannot ping the other Docker containers even though I use the same network in both docker-compose files; instead, Docker creates an additional bridge network on its own. For example, if I have network-01 in docker-compose file 1 and mention the same network in docker-compose file 2, Docker creates an additional network for the second compose file. As a result, I cannot ping the Node/Mongo containers from the Jenkins container (so I guess no SSH either).
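For illustration, both compose files declare the network the same way (trimmed to the relevant part):

networks:
  network-01:
    driver: bridge

From what I have read, Compose prefixes each network with its project name, so each compose file still gets its own bridge network unless network-01 is created up front (docker network create network-01) and declared with external: true in both files, but please correct me if that is the wrong approach.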
Note 1: I'm exposing port 22 on both Docker images, i.e. Node and Mongo...
Note 2: Our current setup has Jenkins on the host machine with Docker volumes exposed from the containers to the host. Is this the preferred approach?
Am I missing the big elephant in the room, or is the solution complicated (it shouldn't be!)?
I need help distributing already-running containers onto a newly added Docker Swarm worker node.
I am running Docker Swarm mode on Docker version 18.09.5. I am using AWS Auto Scaling to create 3 masters and 4 workers. For high availability, if one of the workers goes down, all the containers from that worker node are rebalanced onto the other workers. When autoscaling brings a new node up, I add that worker node to the current Docker Swarm setup using some automation, but Docker Swarm is not balancing containers onto that worker node. I even tried deploying the Docker stack again; the swarm still does not balance the containers. Is it because of a different node ID? How can I customize this? I am deploying the stack with a docker-compose file:
docker stack deploy -c dockerstack.yml NAME
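For reference, dockerstack.yml is along these lines (service and image names are placeholders):

version: "3.7"
services:
  web:
    image: mywebapp:latest
    deploy:
      replicas: 4
      restart_policy:
        condition: on-failure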
The only (current) way to force re-balancing is to force-update the services. See https://docs.docker.com/engine/swarm/admin_guide/#force-the-swarm-to-rebalance for more information.
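For example, assuming the stack was deployed as NAME with a service web (names from the sketch above), something like this redistributes its tasks across the available nodes:

docker service update --force NAME_web

Or, to force-update every service in the swarm:

for service in $(docker service ls -q); do docker service update --force $service; done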
I am having a problem working out the best way to add a new container to an existing cluster while all the containers run in Docker.
Assume I have a Docker swarm, and whenever a container stops or fails for some reason, the swarm brings up a new container and expects it to add itself to the cluster.
How can I make any container be able to add itself to a cluster?
For example, if I want to create a RabbitMQ HA cluster, I need to create a master and then create slaves. Assuming every instance of RabbitMQ (master or slave) is a container, let's now assume that one of them fails. We have 2 options:
1) a slave container has failed.
2) the master container has failed.
Usually, a service that is able to run as a cluster also has the ability to elect a new leader as master. So, assuming this scenario works seamlessly without any intervention, how would a new container added to the swarm (using Docker Swarm) be able to add itself to the cluster?
The problem here is that the new container is not created with new arguments every time; the container is always created exactly as it was deployed the first time. That means I can't just change its command-line arguments, and since this is a cloud environment, I can't hard-code an IP to use.
Something here is missing.
Maybe declaring a "Service" at the Docker Swarm level would actually give the new container the ability to add itself to the cluster without really knowing anything about the other machines in the cluster...
There are quite a few options for scaling out containers with Swarm. They range from something as simple as passing the information in via a container environment variable to something as extensive as service discovery.
Here are a few options:
Pass the IP in as a container environment variable, e.g.:
docker run -td -e HOST_IP=$(ifconfig wlan0 | awk '/t addr:/{gsub(/.*:/,"",$2);print $2}') somecontainer:latest
This sets the container's HOST_IP environment variable to the IP of the machine it was started on.
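Inside the container, the value is then available like any other environment variable (the container name is a placeholder):

docker exec mycontainer printenv HOST_IP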
Service Discovery. Querying a known point of entry to determine the information about any required services, such as IP, port, etc.
This is the most common type of scale-out option. You can read more about it in the official Docker docs. The high-level overview is that you set up a service like Consul on the masters, which your services query to find the information of other relevant services. Example: a web server requires a DB. The DB adds itself to Consul; the web server starts up and queries Consul for the database's IP and port.
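As a sketch, assuming a Consul agent reachable at consul:8500 and a database that registered itself under the service name db (both names are placeholders), the web server could look it up via Consul's HTTP API:

curl http://consul:8500/v1/catalog/service/db

This returns a JSON list containing, among other fields, the Address and ServicePort of each registered db instance.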
Network Overlay. Creating a network in swarm for your services to communicate with each other.
Example:
$ docker network create -d overlay mynet
$ docker service create --name frontend --replicas 5 -p 80:80/tcp --network mynet mywebapp
$ docker service create --name redis --network mynet redis:latest
This allows the web app to communicate with redis by placing them on the same network.
Lastly, in your example above it would be best to deploy the cluster as 2 separate services which you scale individually, e.g. one MASTER service and one SLAVE service. You would then scale each depending on the number you need; e.g. to scale to 3 slaves you would run docker service scale <SERVICE-ID>=<NUMBER-OF-TASKS>, which starts the additional slaves. In this scenario, if one of the scaled slaves fails, swarm starts a new one to bring the number of tasks back to 3.
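Concretely (service and image names are hypothetical), that could look like:

docker service create --name rabbit-master --network mynet rabbitmq:3
docker service create --name rabbit-slave --network mynet rabbitmq:3
docker service scale rabbit-slave=3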
https://docs.docker.com/engine/reference/builder/#healthcheck
Dockerfiles support a HEALTHCHECK instruction for health checks.
Add a health check to your image, for example:
RUN ./anyscript.sh
HEALTHCHECK CMD curl -f http://localhost/ || exit 1
(the command after CMD can be any command you want to run)
HEALTHCHECK runs the given command and uses its exit code (0 = success, 1 = failure) to report the container status as one of:
1. healthy
2. unhealthy
3. starting (the initial state)
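The current status can be read back with docker inspect (the container name is a placeholder):

docker inspect --format '{{.State.Health.Status}}' mycontainer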
Docker Swarm automatically restarts unhealthy containers in a swarm cluster.
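The same check can also be declared in your stack/compose file instead of the Dockerfile, along these lines (the image name is a placeholder):

services:
  web:
    image: mywebapp:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 30s
      timeout: 5s
      retries: 3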
I have seen a couple of tutorials on continuous deployment (on docker.com, codecentric.de, and devopscube.com).
Overall I saw two approaches:
Set up two types of Jenkins servers (master and slave): the master in a Docker container and the slave on the host machine.
Run the Jenkins server in a Docker container and set up a link to the host; using that link, Jenkins can create or recreate Docker images.
In the first approach, I do not understand why they set up an additional Jenkins server residing inside a Docker container. Isn't it enough to just have a Jenkins server on the host machine alongside the Docker containers?
The second approach seems a bit insecure to me because a process from the container is accessing the host OS. Does it have any benefits?
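(By "link to the host" I mean, as far as I understand, bind-mounting the Docker socket into the Jenkins container, e.g.:

docker run -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock jenkins/jenkins:lts

so anything with access to that socket effectively controls the host's Docker daemon.)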
Thanks for any useful info.
We are using Jenkins and Docker for CI/CD. Our Jenkins is set up in master/slave style, where the slaves are distributed across different data centers. When a new build needs to happen, the Jenkins master identifies a slave in one of the DCs, spins up an ephemeral container, and tears it down once done.
Due to firewall limitations, we only have about 10 ports open for the slaves in some of the DCs, for example the port range 8000-8010. By default, Docker assigns host ports from the Linux ephemeral range, 32768 to 61000. The problem is that the Jenkins master cannot talk to the containers if the host port is bound outside 8000-8010. The Jenkins Docker plugin has a limitation where you cannot bind multiple ports (maybe I am wrong here). I would like to know if there is any way we can configure this on the Docker end or in the Jenkins Docker plugin.
After researching in many forums and talking to people: this is not possible, and it is not even recommended to try. The recommended way to overcome this issue is to move to Docker Swarm, where you have a single virtual Docker cloud that takes care of spinning up containers behind the scenes and keeps them ready for consumption even before the need arises. The configuration options are flexible.
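For example (service and image names are hypothetical), a swarm service can publish a fixed port inside your allowed range up front:

docker service create --name build-agent --publish published=8000,target=8080 myagent:latest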
Read more about Swarm here: https://docs.docker.com/swarm/