I am currently using Docker Engine 1.11, and I am investigating whether it is possible for me to move to Docker 1.12 and use Swarm. I am currently using Docker to run 50+ Bamboo agents, all of which need a port mapped to a port on the server. For instance, each container needs to have port 4000 available, so when I run docker run, I do:
docker run -p 10000:4000 myimg
docker run -p 10001:4000 myimg
docker run -p 10002:4000 myimg
docker run -p 10003:4000 myimg
In Docker Swarm, from what I understand, I would run the following command to scale my service to 50 containers:
docker service scale helloworld=50
But if I did this, they would all be trying to map to the same port. How can I accomplish this? Is it possible?
No, you can't. Mapping a single published port to multiple containers is one of the key functions that docker service provides (service discovery); another is that when a container fails, Swarm starts a new one (self-healing).
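To illustrate the service-discovery side, here is a sketch using the image name from your question: in swarm mode you publish one port for the whole service, and the routing mesh load-balances incoming connections across the replicas.

docker service create --name helloworld --replicas 50 -p 10000:4000 myimg

Port 10000 on any node then reaches some replica's port 4000, but you cannot address one specific container that way, which is why this doesn't fit a one-port-per-agent setup.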
I know nothing about Bamboo, so I can't tell you if there's a way to run the Bamboo service in swarm mode.
I am reading a docker run command that maps /var/run/docker.sock,
like:
docker run -it --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock theimage /bin/bash
Why would the container need access to the socket? (This article says it is a very bad idea.)
What would be one case where the container need access to the socket?
It is not necessary unless the container itself needs to invoke the Docker daemon, for example in order to create and run an inner container.
For example, in my CI chain, Jenkins builds a Docker image that runs the build and test process. Inside it we need to create an image to test and then submit it to K8S. In that situation, when Jenkins builds the pipeline container, it passes the Docker socket to it so that the container can create other containers using the host server's Docker daemon.
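As a minimal sketch of that pattern (the inner command is just an example; the official docker image provides the CLI), a container started with the socket mounted talks to the host's daemon:

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker docker ps

The inner docker ps lists the host's containers, because both CLIs talk to the same daemon through the mounted socket.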
I have an app that launches a docker container and automates a few of the routines.
Now I have dockerized this app, and it is not able to talk to other containers over localhost. I tried setting
--network host
when launching the container, and now I am not able to access the containerized webapp over localhost:.
Any pointers?
localhost won't work. Suppose you are running a VM and try to talk to your host or to other VMs running on your machine. If you call localhost from one of the VMs, it is localhost for that VM only, not for your host, so you won't be able to talk from one VM to another by calling localhost. Docker works the same way with regard to localhost. You have two options:
Use a network
Create a network and add all the containers to that network. This is the approach Docker now recommends.
docker network create <your-network-name>
docker run --network <your-network-name> --name <container-name1> <image>
docker run --network <your-network-name> --name <container-name2> <image>
Then use the container name (container-name1) to talk to that service from the other service (container-name2).
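For example (assuming container-name1 serves HTTP on port 8080, which is a placeholder, and that curl is available in the image), from inside container-name2 you can reach it by name:

docker exec -it <container-name2> curl http://<container-name1>:8080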
Use --link option
Alternatively, you could use the --link option, which is a legacy mechanism in Docker. The Docker docs say that unless you have a specific reason to use it, you should not use --link anymore.
docker run --name <container1> <image>
docker run --name <container2> --link <container1> <image>
container2 can then reach container1 by its name (links are one-directional). You can use the container name in places like a DB host setting.
Did you try creating a common bridge network and attaching your containers to the same network?
Create the network:
docker network create networkname
and then add the --network=networkname switch to the docker run command.
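Putting it together, a sketch with placeholder container and image names:

docker network create networkname
docker run --network=networkname --name app1 image1
docker run --network=networkname --name app2 image2

app2 can then reach app1 by the name app1 instead of localhost.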
I figured it out later after going over a lot of other documents.
Step 1: Install docker inside the container. I added the following line to my Dockerfile:
RUN curl -sSL https://get.docker.com/ | sh
Step 2: Provide the volume mapping in the docker run command:
-v /var/run/docker.sock:/var/run/docker.sock
Now the host's docker commands are accessible from within my current container, and without changing the --network of the current container I am able to access other containers over localhost.
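Putting the two steps together, the run command would look something like this (the image name is a placeholder):

docker run -d -v /var/run/docker.sock:/var/run/docker.sock my-automation-app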
I am very new to Docker and have just started venturing into it. Reading online, I came across two Docker commands: docker run and docker service. As I understand it, docker run spins up a new container. However, I am not clear on what docker service does. Does it spin up containers in a Swarm?
Can anyone explain it in simple terms?
The docker run command creates and starts a container on the local docker host.
A docker "service" is one or more containers with the same configuration running under docker's swarm mode. It's similar to docker run in that you spin up a container. The difference is that you now have orchestration. That orchestration restarts your container if it stops, finds the appropriate node to run the container on based on your constraints, scales your service up or down, allows you to use the mesh networking and a VIP to discover your service, and performs rolling updates to minimize the risk of an outage during a change to your running application.
docker run vs docker service
docker run:
creates and starts individual containers; you can create any number of containers from different images.
docker service:
creates any number of containers from the same image in a single command.
SYNTAX:
docker service create --name service-name --network network-name --replicas number-of-containers image-name
EXAMPLE:
docker service create --name service1 --network swarm-net --replicas 5 redis
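Once the service exists, you can change the number of containers with a single command, e.g. for the service created above:

docker service scale service1=10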
I have a setup which runs my Docker container like this.
run-docker.sh
docker build -t wordpress-gcloud .
container=$(docker run -d wordpress-gcloud)
ipOfContainer=$(docker inspect "$container" | jq -r '.[0].NetworkSettings.IPAddress')
But now I have setup a Docker Swarm (1 manager + 2 workers).
How should I convert the above bash script to run the container on the swarm?
Typically, you can access your Swarm cluster via the Swarm APIs, which are similar to the Docker API. To access the Swarm APIs, you can use the -H parameter with docker commands. For example, if you have a swarm manager running on your local machine and the port number is 3376, you can get your swarm cluster info with:
docker -H 127.0.0.1:3376 info
You can also inspect the swarm cluster containers by:
docker -H 127.0.0.1:3376 inspect <container ID>
More details about communicating with the Swarm cluster can be found here: https://docs.docker.com/swarm/install-manual/#/step-6-communicate-with-the-swarm
But in your case, I think the docker build command could be a problem. In my understanding, Swarm will pick a node from your cluster to execute the docker build process, so if the Dockerfile does not exist on the node where docker build runs, you will get an error. My suggestion is to build your image in one fixed place, push it to an image registry, and then pull and run the image anywhere you want.
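A sketch of that workflow (the registry address is a placeholder):

docker build -t registry.example.com/wordpress-gcloud .
docker push registry.example.com/wordpress-gcloud
docker -H 127.0.0.1:3376 run -d registry.example.com/wordpress-gcloud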
This is a two-part question.
First part:
What is the best approach to run Consul and a Tomcat in the same docker container?
I've built my own image, installing both Tomcat and Consul correctly, but I am not sure how to start them. I tried putting both calls as CMDs in the Dockerfile, with no success. I also tried putting Consul as an ENTRYPOINT in the Dockerfile and calling Tomcat in the docker run command, or vice versa, but I have a feeling that neither is a good way.
The containers will run on one AWS instance. Each docker container would run Consul as a server, to register itself with another AWS instance. Consul and consul-template will be integrated to handle the load balancing properly. This way, my HAProxy instance will be able to correctly forward requests as I plug in or unplug containers.
Second part:
In individual tests, the docker container was able to reach my main Consul server (the leader), but it failed to register itself as an "alive" node.
Reading the logs on the Consul server, I think it is a matter of which ports I am exposing and publishing. In AWS, I already allow communication on all TCP and UDP ports between the instances in this particular Security Group.
Do you know which ports I should be exposing and publishing to allow proper communication between a standalone Consul (AWS instance) and the Consul servers running inside docker containers on another AWS instance? What is the command to run the docker container: docker run -p 8300:8300 .........
Thank you.
I would use ENTRYPOINT to kick off a script on docker run.
Something like
ENTRYPOINT ["/myamazingbashscript.sh"]
The syntax might be slightly off, but you get the idea.
The script should start both services and finally tail -f the Tomcat logs (or any log file).
tail -f never exits, so it prevents the container from exiting, and it also helps you see what Tomcat is doing.
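A minimal sketch of such a script; the Consul arguments and Tomcat paths are assumptions based on a typical install:

#!/bin/bash
# start Consul in the background
consul agent -config-dir=/etc/consul.d &
# start Tomcat (catalina.sh start returns after launching it)
/usr/local/tomcat/bin/catalina.sh start
# block on the Tomcat log so the container keeps running
tail -f /usr/local/tomcat/logs/catalina.out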
After a docker run, use docker logs -f <containerName> to watch the logs.
Note that because the container doesn't exit, you can exec into it with docker exec -it <containerName> bash.
This lets you have a look around inside the container.
Having two services in one container is generally not the best approach, because it destroys the separation of concerns and reusability, but you may have valid reasons.
To build, use docker build, then run with docker run as you stated.
If you decide to go for a two-container solution, you will need to expose ports between the containers to allow them to talk to each other. You could share files between containers using volumes_from, as sketched below.
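A sketch of the two-container variant (image and container names are hypothetical):

docker run -d --name consul --expose 8500 my-consul-image
docker run -d --name tomcat --link consul --volumes-from consul my-tomcat-image

Here tomcat can reach consul by name via the link, and --volumes-from lets it read any files the consul container declares as volumes.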