Docker - Container is not created after creating a service

I want to create a service from the rabbitmq image, so I run the following command:
docker service create --name rabbitmq --hostname rabbitmq --publish 5672:5672 --publish 15672:15672 --mount source=rabbitmq,target=/var/lib/rabbitmq rabbitmq:3.6.10-management
Then I run docker service ls to see if the service was created and everything looks OK, but when I run docker ps no container appears.
The weird thing is that docker service ls looks like this:
ID NAME MODE REPLICAS IMAGE PORTS
ye8r8xk2k49c rabbitmq replicated 1/1 rabbitmq:3.6.10-management *:5672->5672/tcp,*:15672->15672/tcp
Can someone help me with this issue?
Thanks in advance.

You should use
docker service ps rabbitmq
This will tell you which node is running the container for your service.
docker ps only shows the containers running on the current node, not across your swarm cluster.
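For example, you would first check the NODE column of the task listing and then run docker ps on that node (the name filter below is just a convenience, not part of the original answer):
docker service ps rabbitmq
docker ps --filter name=rabbitmq   # run this on the node reported above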

Related

Docker swarm - Port not accessible

I am trying out some things with docker and docker swarm and currently I am running into a problem.
If I create a container with:
docker run -d --name my_nginx -p 8080:80 nginx
everything works fine and I am able to access this port.
If I instead try to create a service with docker swarm (the container was removed beforehand), I am not able to reach that port:
docker service create -d --name my_service_nginx --replicas=1 -p 8080:80 nginx
It seems that the service does not create a port mapping:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3417b80036c nginx:latest "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 80/tcp my_service.1.1l3fwcct1m9hoallkn0g9qwpd
Do you have any idea what I am doing wrong?
Best regards
Jan
Launching a Docker swarm inside LXC is not possible; see: Docker swarm get access from outside network
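As a general aside (this check is my own suggestion, not part of the answer above): in swarm mode a published port is attached to the service's ingress endpoint rather than to the individual container, which is why the PORTS column of docker ps stays empty. You can confirm the mapping exists on the service itself:
docker service inspect --format '{{json .Endpoint.Ports}}' my_service_nginx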

Why is my Docker container not able to access other containers?

I have three Docker applications (containers), where one container communicates with the other two. If I run the containers using the commands below, container 3 is able to access container 1 and container 2.
docker run -d --network="host" --env-file container1.txt -p 8001:8080 img1:latest
docker run -d --network="host" --env-file container2.txt -p 8080:8080 img2:latest
docker run -d --network="host" --env-file container3.txt -p 8000:8080 img3:latest
But this only works with the host network. If I remove the --network="host" option, I am not able to access the application from outside (in a web browser). In order to access it from outside, I need to make the host port and the container port the same, as below.
docker run -d --env-file container1.txt -p 8001:8001 img1:latest
docker run -d --env-file container2.txt -p 8080:8080 img2:latest
docker run -d --env-file container3.txt -p 8000:8000 img3:latest
With the above commands I am able to access my application in a web browser, but container 3 is not able to communicate with container 1. Container 3 can still access container 2, because there I am exposing 8080 as both the host and container port, but I can't publish host port 8080 a second time for container 3.
How can I resolve this issue?
In the end, my goal is that the application should be accessible in a browser without using the host network; it should use a bridge network, and container 3 needs to communicate with containers 1 and 2.
On user-defined networks, containers can not only communicate by IP address but can also resolve a container name to an IP address. This capability is called automatic service discovery.
Read this for more details on Docker container networking.
You can perform the following steps to achieve the desired result.
Create a private bridge network.
docker network create --driver bridge private-net
Now start your application containers with --network private-net added to your docker run commands.
docker run -d --env-file container1.txt -p 8001:8001 --network private-net img1:latest
docker run -d --env-file container2.txt -p 8080:8080 --network private-net img2:latest
docker run -d --env-file container3.txt -p 8000:8000 --network private-net img3:latest
This way, all three containers will be able to communicate with each other and also with the internet.
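As a hypothetical illustration (the container names app1/app3 and the curl call are my own assumptions, not part of the answer), giving the containers names lets container 3 reach container 1 purely by name:
docker run -d --name app1 --env-file container1.txt -p 8001:8001 --network private-net img1:latest
docker run -d --name app3 --env-file container3.txt -p 8000:8000 --network private-net img3:latest
docker exec app3 curl http://app1:8001   # assumes curl exists in img3 and container 1 listens on 8001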
When you use --network=host, you are telling Docker not to isolate the container's network but to use the host's network stack instead. All the containers then share the host's network, so they can communicate with each other without any issues. When you remove --network=host, Docker isolates each container's network again, which is what prevents container 3 from reaching container 1.
You will need some sort of orchestration tool such as Docker Compose or Docker Swarm.

How to get TaskManager containers registered with the JobManager container for Flink?

I am trying out the Flink Docker image for the first time. I am following the instructions at https://hub.docker.com/_/flink, which say:
You can run a JobManager (master).
$ docker run --name flink_jobmanager -d -t flink jobmanager
You can also run a TaskManager (worker). Notice that workers need to register
with the JobManager directly or via ZooKeeper so the master starts to
send them tasks to execute.
$ docker run --name flink_taskmanager -d -t flink taskmanager
Can someone explain how the TaskManager registers with the JobManager from these commands?
Thanks
In order to start a Flink cluster on Docker, I would strongly recommend using docker-compose, for which you can also find a config file here.
If you want to set up a Flink cluster with Docker manually, then you have to start the containers so that they can resolve each other's names. First you need to create a custom network via
docker network create my-network
Next, you have to start the jobmanager on this network and configure its name and hostname to be the same. That way Flink will bind to a hostname that is resolvable.
docker run --name jobmanager --hostname jobmanager --rm --net my-network -d -t flink jobmanager
Last but not least, we need to start the taskmanager and tell it the name of the JobManager. This is done by setting the environment variable JOB_MANAGER_RPC_ADDRESS to jobmanager.
docker run --name taskmanager --net my-network -e JOB_MANAGER_RPC_ADDRESS=jobmanager -t -d flink taskmanager
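To verify the registration (this check is my own addition, not part of the answer), you can publish the JobManager's web UI, which listens on port 8081 by default, and confirm the TaskManager shows up there, or simply watch the TaskManager logs:
docker run --name jobmanager --hostname jobmanager --rm --net my-network -p 8081:8081 -d -t flink jobmanager
docker logs -f taskmanager   # should report a successful registration at jobmanager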

How to create websocket connection between two Docker containers

I've got two Docker containers that need to have a websocket connection between the two.
I run one container like this:
docker run --name comm -p 8080:8080 comm_module:latest
to expose port 8080 to the host. Then I try to run the second container like this:
docker run --name test -p 8080:8080 datalogger:latest
However, I get the error below:
docker: Error response from daemon: driver failed programming external
connectivity on endpoint test
(f06588ee059e2c4be981e3676d7e05b374b42a8491f9f45be27da55248189556):
Bind for 0.0.0.0:8080 failed: port is already allocated. ERRO[0000]
error waiting for container: context canceled
I'm not sure what to do. Should I connect these to a network? How do I run these containers?
You can't bind the same host port twice at the same time. You can change the host port for one of the containers:
docker run --name comm -p 8080:8080 comm_module:latest
docker run --name test -p 8081:8080 datalogger:latest
Then check the configuration inside the containers for how they communicate with each other.
You can also create a link between them:
docker run --name test -p 8081:8080 --link comm datalogger:latest
I finally worked it out. These are the steps involved for a two-way websocket communication between two Docker containers:
Modify the source code in the containers to use the name of the other container as the destination host address + port number (e.g. comm:port_no inside test, and vice versa).
Expose the same port (8080) in the Dockerfiles of the two containers and build the images. There is no need to publish the ports, as they will be visible to the other containers on the network.
Create a user-defined bridge network like this:
docker network create my-net
Create my first container and attach it to the network:
docker create --name comm --network my-net comm_module:latest
Create my second container and attach it to the network:
docker create --name test --network my-net datalogger:latest
Start both containers by issuing the docker start command.
And the two-way websocket communication works nicely!
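For reference, the start step in the list above is simply:
docker start comm
docker start test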
My solution works fine.
docker network create mynet
docker run -p 443:443 --net=mynet --ip=172.18.0.3 --hostname=frontend.foobar.com foobarfrontend
docker run -p 9999:9999 --net=mynet --ip=172.18.0.2 --hostname=backend.foobar.com foobarbackend
route /P add 172.18.0.0 MASK 255.255.0.0 10.0.75.2
The foobarfrontend container calls a wss websocket on foobarbackend on port 9999.
PS: I work with Docker on Windows 10 using Linux containers.
Have fun!

Is it possible to run ELK as a Docker service?

I am new to the Docker world and am trying to run the Elastic (ELK) stack on Docker. I am able to start ELK as a container and it works perfectly.
docker run -v /var/lib/docker/volumes/elk-data:/var/lib/elasticsearch \
-v /var/lib/docker/volumes/elk-data:/var/log/elasticsearch \
-p 5601:5601 -p 9200:9200 -p 5044:5044 \
--name elk sebp/elk
I am using journalbeat to forward metrics to the Elasticsearch service and do the visualization in Kibana.
I was able to run journalbeat as a service using the following command:
sudo docker service create --replicas 2 --mount type=bind,source=/opt/apps/shared/dev/docker/volumes/journalbeat/config/journalbeat.yml,target=/journalbeat.yml --mount type=bind,source=/run/log/journal,target=/run/log/journal --mount type=bind,source=/etc/machine-id,target=/etc/machine-id --constraint node.labels.nodename==devlabel --name journalbeat-svc mheese/journalbeat:v5.5.2
Is there a way we can run ELK as a service, so that we can start two containers: one on the swarm manager and the other on a worker node?
An example of running the full ELK stack as separate docker containers is available here: https://github.com/elastic/examples/tree/master/Miscellaneous/docker/full_stack_example
This uses docker-compose so you can easily bring the containers up and down.
ELK means Elasticsearch, Logstash, and Kibana, so there are three services that must be running. In Docker Swarm a service has zero or more instances, but every instance is a container based on the same image.
So, in order to run ELK as a service you would have to start Elasticsearch, Logstash, and Kibana in the same container. Although theoretically it is possible, this is not recommended (there should be one process per container).
Instead, you should create three services: one each for Elasticsearch, Logstash, and Kibana.
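A minimal sketch of that approach (the image tags, network name, and environment settings below are assumptions, not from the answer):
docker network create --driver overlay elk-net
docker service create --name elasticsearch --network elk-net -p 9200:9200 \
  -e discovery.type=single-node docker.elastic.co/elasticsearch/elasticsearch:7.17.9
docker service create --name logstash --network elk-net -p 5044:5044 \
  docker.elastic.co/logstash/logstash:7.17.9
docker service create --name kibana --network elk-net -p 5601:5601 \
  -e ELASTICSEARCH_HOSTS=http://elasticsearch:9200 docker.elastic.co/kibana/kibana:7.17.9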
