I'd like to install MarkLogic in Docker and form a cluster, i.e. two or more MarkLogic node instances running on the same machine. How can I achieve that?
The Building a MarkLogic Docker Container blog entry describes how to create and initialize a Docker image running MarkLogic.
Near the bottom of the article, it describes how to link multiple containers using the --link switch, and how docker-compose can assist in managing a cluster of Docker containers:
Linking Containers
You are the one who tells Docker how containers should communicate! When using the docker run command, you can also pass in a --link flag.
Consider the following examples:
docker run -d --name=marklogic1 --hostname=marklogic1.local -p 8000-8002:8000-8002 marklogic:8.05-preinitialized
docker run -d --name=marklogic2 --hostname=marklogic2.local --link marklogic1:marklogic1 -p 18000-18002:8000-8002 marklogic:8.05-preinitialized
The above creates two MarkLogic containers; the second has the --link flag. Docker networking sets environment variables and adds entries to the /etc/hosts file of the linking container that point to the container it links to. This sets up the ability for the Docker containers to communicate over the internal Docker network. The --hostname flag is used to be consistent with MarkLogic, which uses the full domain name when contacting other MarkLogic servers in the cluster, so we simply add the .local domain to the name of the container.
Finally, note that the -p flag on the second container maps MarkLogic's ports 8000 to 8002 inside the container to ports 18000 to 18002 on the host computer. Why not use host ports 8000 to 8002? Because the first container is already using them; published container ports are bound to ports on the host. You can, of course, choose any range of open ports on your host computer to map the container's MarkLogic ports to.
Now, simply point your browser to port 8001 on the first container (marklogic1) and go through the post-installation steps, skipping the step of joining a cluster. When finished, point your browser to port 18001 for the second container (marklogic2) and go through the post-installation steps. When asked to join a cluster, use the host name marklogic1.local and leave the port number at 8001. MarkLogic in the second container will contact MarkLogic in the first container, and the configuration will be updated so that marklogic2 joins the cluster with marklogic1. Create and add a third MarkLogic container, also linking it with marklogic1:marklogic1 and marklogic2:marklogic2, and you'll soon have a proper 3-node MarkLogic cluster!
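For that third container, the run command would follow the same pattern; a sketch (the 28000-28002 host port range is just an example, any free ports will do):
docker run -d --name=marklogic3 --hostname=marklogic3.local --link marklogic1:marklogic1 --link marklogic2:marklogic2 -p 28000-28002:8000-8002 marklogic:8.05-preinitialized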
Using docker-compose
Docker has created another tool to aid in managing clusters of Docker containers. docker-compose has commands to create multiple containers and network them together; you can then create, start, and stop them using docker-compose commands. Docker uses a file called Dockerfile to build images; docker-compose uses a file called docker-compose.yml to define networks of containers.
docker-compose is available as a separate download.
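As a rough sketch, a docker-compose.yml equivalent of the two docker run commands above might look like this (compose v1 syntax to match the --link usage; adjust the image tag to whatever you built):
marklogic1:
  image: marklogic:8.05-preinitialized
  hostname: marklogic1.local
  ports:
    - "8000-8002:8000-8002"
marklogic2:
  image: marklogic:8.05-preinitialized
  hostname: marklogic2.local
  links:
    - marklogic1
  ports:
    - "18000-18002:8000-8002"
Running docker-compose up -d then creates and starts both containers; the post-installation and cluster-joining steps are the same as above.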
I have a basic question about Docker that is probably due to lack of knowledge on my part about networking. The Docker container networking documentation states:
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
It sounds like, when you run a container on your computer without mapping any ports from the container to the host machine, the container should not be able to access the internet. However, for example, I pull the Ubuntu image with:
docker pull ubuntu
Then I enter the container's command line with:
docker run -ti ubuntu bash
At that point, I can run apt-get update and the container starts pulling information from the internet without mapping any ports (e.g. -p 80:80). How is this possible?
Publishing a port allows machines external to the docker host to access the container, inbound connectivity. By default, containers can access the network with outbound connectivity.
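For example, to allow inbound connections you publish a port explicitly (the nginx image and host port 8080 here are just placeholders):
docker run -d -p 8080:80 nginx
Without the -p flag, nothing outside the Docker host could reach port 80 in that container, but the container itself could still make outbound connections.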
To restrict a container from accessing the network, you can either run the container with no network (note: this still creates a loopback interface, and you can later connect it to another network):
docker run --net none ...
Or you can create a network with the --internal option and run containers on that network:
docker network create --internal internal
docker run --net internal ...
The internal network is created without a gateway interface on the bridge network.
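You can see the difference with the same apt-get test from the question; the first command succeeds on the default bridge network, while the second fails because the internal network has no route out:
docker run --rm -ti ubuntu apt-get update
docker run --rm -ti --net internal ubuntu apt-get update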
When they talk about publishing ports, they mean inbound ports.
Outbound ports work - depending on your network type - see here for more:
https://docs.docker.com/network/
I have installed docker on a private cloud VM (RHEL 7.2) with a floating IP say 10.135.118.6
I also have a Java Play Application which talks to third party database servers. The database have white-listed the floating IP 10.135.118.6 so that my Java Play App can make a connection to it.
Now I wish to dockerize this Java Play App, but the IP addresses assigned to the Docker containers come from the default docker bridge network, which hands out dynamic addresses in the 172.17.0.0/16 range (e.g. 172.17.0.2).
This is creating a problem for me, as that IP is not white-listed on my database server, so the app cannot connect and the container eventually stops.
Is there any way I can assign the VM floating IP to my docker
container instead of the docker bridge network IP?
To achieve this:
First, you can create your own Docker network with a custom subnet (e.g. JavaPlay_net):
docker network create --subnet=172.32.0.0/16 JavaPlay_net
Then simply run the image (for example, the ubuntu image) with a fixed IP on that network:
docker run --net JavaPlay_net --ip 172.32.0.22 -it ubuntu bash
Then, in the Ubuntu shell, verify the assigned IP:
hostname -i
Additionally, you can use:
--hostname to specify a hostname
--add-host to add more entries to /etc/hosts
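Putting those options together, a sketch for the Play app container might look like this (the image name, hostname, database host name, and database IP are placeholders for your own values):
docker run -d --net JavaPlay_net --ip 172.32.0.22 --hostname playapp.local --add-host db.example.com:10.135.118.50 my-play-app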
Reference for creating a Docker network:
https://docs.docker.com/engine/reference/commandline/network_create/#options
I create a swarm and join a node, very nice all works fine
docker swarm init --advertise-addr 192.168.99.1
docker swarm join --token verylonggeneratedtoken 192.168.99.1:2377
I create 3 services on the swarm manager
docker service create --replicas 1 --name nginx --publish published=80,target=80 nginx
docker service create --replicas 1 --name php --publish published=9000,target=9000 php:7.1-fpm
docker service create --replicas 1 --name postgres --publish published=5432,target=5432 postgres:9.5
All services boot up just fine, but when I customize the php image with my app and configure nginx to listen to the php-fpm socket, I can't find a way to get these three services to communicate. Even when I access the services using "docker exec -it container-id bash" and try to ping the container names or host names (I even tried to curl them), it doesn't work.
What I am trying to say is that I don't know how to configure nginx to connect to fpm, since I don't know how one container communicates with another under swarm. Using docker-compose or docker run it is as simple as using the links option. I've read all the documentation around this, spent hours on trial and error, and I just couldn't wrap my head around it. I have read about the routing mesh, which does publish the ports to the outside world, but I couldn't figure out which IP is published for the internal containers; it also can't be a random IP, as that would make it hard to manage my app's configuration, including the nginx configuration.
To have multiple containers communicate with each other, they need to be running on a user-created network. With swarm mode, you want to use an overlay network so containers can run on multiple hosts.
docker network create -d overlay mynet
Then run the services with that network:
docker service create --network mynet ...
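Applied to the three services from the question, that might look like the following (image names taken from the question); within the overlay network each service is reachable by its service name through swarm's built-in DNS, so nginx can point its fastcgi_pass at php:9000:
docker service create --replicas 1 --name nginx --network mynet --publish published=80,target=80 nginx
docker service create --replicas 1 --name php --network mynet php:7.1-fpm
docker service create --replicas 1 --name postgres --network mynet postgres:9.5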
The easier solution is to use a compose.yml file to define each of the services. By default, the services in a stack are deployed on their own overlay network:
docker stack deploy -c compose.yml stack-name
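A minimal compose.yml for such a stack might look like this (version 3 syntax; the services in the stack land on a shared overlay network automatically, so nginx can again reach the php service as php:9000):
version: "3"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
  php:
    image: php:7.1-fpm
  postgres:
    image: postgres:9.5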
Or you can just write a single docker-compose file and deploy it as a docker stack.
It's easier and more reliable to combine php-fpm and nginx in the same image. I know this goes against the official single-app-per-image guidance, but for cases like php-fpm + nginx, where you need both to serve a request, it's the best approach. I have a WIP sample here: https://github.com/BretFisher/php-docker-good-defaults
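A very rough sketch of such a combined image, assuming you supply your own nginx.conf and supervisord.conf (the file names and paths here are illustrative, not taken from that repository):
FROM php:7.1-fpm
RUN apt-get update && apt-get install -y nginx supervisor && rm -rf /var/lib/apt/lists/*
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 80
CMD ["supervisord", "-n"]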
I have created Docker containers using docker-compose.yml on a single host.
Can anybody tell if docker-compose.yml file can be used to start Docker containers on multiple VMs ? If yes, how?
The compose file cannot be used with the new Docker "Swarm Mode" introduced in June 2016 (Docker 1.12). The "legacy" Docker Swarm accepts compose files, but you should really focus on learning Docker "Swarm Mode", not the old Docker Swarm. It's much simpler too, except for the missing support for compose files.
"swarm mode" accepts dab files and there is a way to convert compose files to dab, but it's experimental (which means that a lot of what you have put in your compose file won't translate). So the current best way is to create bash scripts with the CLI commands eg: docker service create --name nginx nginx:1.10-alpine.
And do have a look at Matt's link about learning the basics of Docker swarm mode: http://docs.docker.com/engine/swarm/key-concepts
You can quickly spin up a legacy Swarm cluster using the swarm container and a node list (by IP like 10.0.0.1 or hostname like nodeb):
docker run -d -P --restart=always --name swarm-manager swarm manager \
"nodes://10.0.0.1:2376,nodeb:2376,nodec:2376"
export DOCKER_HOST=$(docker port swarm-manager 2375)
docker-compose up
Before running this, you'd need to configure the engines to listen on 2376 with TLS configured, a client key/certificate, and the appropriate network access. See docker's documentation on TLS for more details on configuring this.
As per the docker link docs, I can only --link to one (already running) container to access that container's internal ports.
How can I link one container to 2 or more other containers? (MongoDB and another web service in my case.)
(Right now I am exposing the ports of the second container to the host and then accessing them via host:port. Another possible workaround might be to link the two containers to each other.)
docker run -d --link node1:node1 --link node2:node2 --link node3:node3 -p hostport:containerport your-image
I run the command above and it works.
Alternatively, you can turn on inter-container communication by adding --icc=true to the docker daemon's command-line, and you won't have to link the containers, just access them using the Docker Host's IP address and the containers' published ports.
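That is a daemon-level setting, so it goes on the dockerd command line rather than on docker run; for example:
dockerd --icc=true
Note that --icc defaults to true, so it only needs to be set explicitly if inter-container communication was previously disabled.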
Docker Networking
For an easy solution you could use Docker Compose. In your compose file (docker-compose.yml), use the links option:
Link to containers in another service. Either specify both the service name and a link alias (SERVICE:ALIAS), or just the service name.
container_name:
  links:
    - node1
    - node2
    - node3:alias3
    - noden
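For the case in the question (one app container linked to MongoDB and another web service), a sketch might look like this (the service and image names for the app and the second service are placeholders):
app:
  image: my-web-app
  links:
    - mongodb
    - otherservice
mongodb:
  image: mongo
otherservice:
  image: my-other-service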