I'm trying to use NSQ in Docker Swarm without success
mhlg/rpi-nsq is a Docker image built for the Raspberry Pi (ARMv7) board, and I can confirm it works correctly when run as a normal Docker container.
Running NSQ in Docker (OK)
# create a bridged network
$ docker network create nsq_network
# run lookupd
$ docker run --name nsqlookupd --network nsq_network -p 4160:4160 -p 4161:4161 mhlg/rpi-nsq nsqlookupd
# run nsqd
$ docker run --name nsqd --network nsq_network -p 4150:4150 -p 4151:4151 mhlg/rpi-nsq nsqd --broadcast-address=nsqd --lookupd-tcp-address=nsqlookupd:4160
# run nsqadmin
$ docker run --name nsqadmin --network nsq_network -p 4171:4171 mhlg/rpi-nsq nsqadmin --lookupd-http-address=nsqlookupd:4161
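Before moving to swarm mode, the containers can be sanity-checked over NSQ's HTTP ports; /ping is part of the standard nsqd/nsqlookupd HTTP API and simply returns OK:
# quick health checks for the non-swarm setup
$ curl http://localhost:4161/ping   # nsqlookupd
$ curl http://localhost:4151/ping   # nsqd
# nsqadmin UI: http://localhost:4171/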
Running NSQ in Docker Swarm mode (FAIL)
This is what I'm doing on the swarm manager:
# create an overlay network
$ docker network create nsq_network
# run nsqlookupd
$ docker service create --replicas 1 --name nsqlookupd --network nsq_network -p 4160:4160 -p 4161:4161 mhlg/rpi-nsq nsqlookupd
# run nsqd
$ docker service create --replicas 1 --name nsqd --network nsq_network -p 4150:4150 -p 4151:4151 mhlg/rpi-nsq nsqd --lookupd-tcp-address=nsqlookupd:4160 --broadcast-address=nsqd
# run nsqadmin
$ docker service create --replicas 1 --name nsqadmin --network nsq_network -p 4171:4171 mhlg/rpi-nsq nsqadmin --lookupd-http-address=nsqlookupd:4161
If I attach to the nsqd service I can see that it is not able to connect to the nsqlookupd service.
[nsqd] 2016/12/09 16:51:56.851953 LOOKUPD(nsqlookupd:4160): sending heartbeat
[nsqd] 2016/12/09 16:51:56.852049 LOOKUP connecting to nsqlookupd:4160
[nsqd] 2016/12/09 16:51:57.852457 LOOKUPD(nsqlookupd:4160): ERROR PING - dial tcp: i/o timeout
It looks like the overlay network creates some issues (multicast?), but I cannot figure out how to solve it, especially on an ARM device.
I tried SSHing into the Docker host running the nsqd service and executing some DNS commands from inside the nsqd container.
# resolve google.com (OK)
root@3206d1c3cd3d:/# nslookup google.com
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: google.com
Address: 216.58.214.78
# resolve nsqd service (OK) - can resolve the container I'm executing the command from
root@e1f6430acd1c:/# nslookup nsqd
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: nsqd
Address: 10.0.0.2
# resolve nsqlookupd service (FAIL)
root@e1f6430acd1c:/# nslookup nsqlookupd
;; connection timed out; no servers could be reached
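Two more checks that might narrow this down (tasks.<service> is the DNS entry swarm adds for services attached to an overlay network, so this assumes the network really is an overlay):
# from inside the nsqd container
root@e1f6430acd1c:/# nslookup tasks.nsqlookupd
# from the swarm manager: confirm the network driver and which services are attached
$ docker network inspect nsq_network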
I ran into the exact same issue in Docker Swarm. This is how I resolved it:
docker service create \
--mode global \
--name swarm-master-nsq_nsqlookupd \
--constraint node.role==manager \
--hostname nsqlookupd \
--network name=swarm-master-nsq_nsq,alias=nsqlookupd \
nsqio/nsq:latest /nsqlookupd
docker service create \
--replicas 3 \
--name swarm-master-nsq_nsqd \
--constraint node.role==manager \
--hostname nsqd \
--network name=swarm-master-nsq_nsq,alias=nsqd \
nsqio/nsq:latest sh -c '/nsqd --broadcast-address=$(hostname -i) --lookupd-tcp-address=nsqlookupd:4160'
docker service create \
--replicas 1 \
--publish 4171:4171 \
--name swarm-master-nsq_nsqadmin \
--constraint node.role==manager \
--hostname nsqadmin \
--network name=swarm-master-nsq_nsq,alias=nsqadmin \
nsqio/nsq:latest /nsqadmin --lookupd-http-address=nsqlookupd:4161
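To sanity-check this setup once the services are up (a sketch; docker service logs needs a reasonably recent Docker version, and the LOOKUPD prefix is the one visible in the logs above):
# each nsqd replica should log a successful connection instead of ERROR PING
docker service logs swarm-master-nsq_nsqd 2>&1 | grep LOOKUPD
# the nsqadmin UI is published on every swarm node: http://<any-node-ip>:4171/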
As far as I can tell, there are a couple of issues in your example:
You are not aliasing nsqlookupd (or the other services) on the network.
The broadcast address of nsqd is incorrect (assuming you want to increase the number of nsqd nodes at some point).
You indicate:
# create an overlay network
$ docker network create nsq_network
However, that does not create an overlay network but rather a bridge network.
Consider running:
docker network create --driver overlay nsq_network
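To double-check which driver an existing network uses, and to recreate it as an overlay (--attachable is optional and only needed if you also want to attach plain docker run containers to it):
docker network inspect --format '{{ .Driver }}' nsq_network
docker network rm nsq_network
docker network create --driver overlay --attachable nsq_network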
Related
I have created three networks A, B and C using Docker on an Ubuntu VM; each network contains three containers: two busybox and one nginx. I have port-forwarded each network's nginx container on ports 80, 81 and 82 respectively, using the commands below:
sudo docker run -itd --rm -p 82:82 --network C --name web3 nginx
sudo docker run -itd --rm -p 81:81 --network B --name web2 nginx
sudo docker run -itd --rm -p 80:80 --network A --name web1 nginx
But when I try to access a container from my host machine using my VM's IP address along with the port, e.g. https://192.168.18.240:82, I cannot reach the container in the other network. With the same IP address on port 80 I am able to access nginx, but not on ports 81 and 82. I have cleared the cache and the browsing history, but all in vain.
All of the Docker nginx containers listen on port 80. You are mapping the containers on networks B and C to the wrong container port:
sudo docker run -itd --rm -p 82:**80** --network C --name web3 nginx
sudo docker run -itd --rm -p 81:**80** --network B --name web2 nginx
sudo docker run -itd --rm -p 80:80 --network A --name web1 nginx
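To verify the fix, docker port shows the host-to-container bindings, and a plain HTTP request against the VM address should now reach each nginx (192.168.18.240 is the VM address from the question; note nginx serves HTTP, not HTTPS, on port 80):
sudo docker port web3        # should print 80/tcp -> 0.0.0.0:82
curl http://192.168.18.240:82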
My Environment:
Windows 10 Home
WSL2
Cassandra 4.0.1: Official Docker Image
Docker command:
docker run --name cassandra-node-0 -p 7000:7000 -p 7001:7001 -p 7199:7199 -p 9042:9042 -p 9160:9160 -e CASSANDRA_CLUSTER_NAME=MyCluster -e CASSANDRA_ENDPOINT_SNITCH=GossipingPropertyFileSnitch -e CASSANDRA_DC=datacenter1 -e CASSANDRA_BROADCAST_ADDRESS=192.168.1.101 -d cassandra
CQLSH Command:
docker run -it -e CQLSH_HOST=$(docker inspect --format='{{ .NetworkSettings.IPAddress}}' cassandra-node-0) --name cassandra-client --entrypoint=cqlsh cassandra
I am trying to connect to the Cassandra node using cqlsh from Ubuntu in WSL2 on the same PC.
I did not change any *.yaml files and only use Docker environment variables.
When I set CQLSH_HOST to the node's Docker network IP, cqlsh successfully connects to the node.
But when I use my private IP, public IP or 127.0.0.1, cqlsh's connection to the node is refused.
This looks like the same issue that occurs when nodes from different networks try to connect.
I think I'm missing some Docker environment setting.
What settings am I missing?
[Update] I added some port forwarding rules to the firewall, but the issue remains.
[Update 2] docker ps -a result:
0.0.0.0:7000-7001->7000-7001/tcp, :::7000-7001->7000-7001/tcp, 0.0.0.0:7199->7199/tcp, :::7199->7199/tcp, 0.0.0.0:9042->9042/tcp, :::9042->9042/tcp, 0.0.0.0:9160->9160/tcp, :::9160->9160/tcp
Try adding --hostname and --network when you run Cassandra. For example:
$ docker run --rm -d \
    --name cassandra-node-0 \
    --hostname cassandra-node-0 \
    --network cassandra-node-0 \
    cassandra
You'll find that it's easier to connect via cqlsh by adding:
--network cassandra-node-0
-e CQLSH_HOST=cassandra-node-0
to your docker run command. Cheers!
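One detail worth noting: --network cassandra-node-0 refers to a user-defined network, so it has to be created before running the commands above (a sketch reusing the names from this answer and the cqlsh command from the question):
$ docker network create cassandra-node-0
$ docker run -it --rm --network cassandra-node-0 -e CQLSH_HOST=cassandra-node-0 --name cassandra-client --entrypoint=cqlsh cassandra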
I installed a local Docker registry as below:
docker pull registry
and then ran:
docker run -d -p 5001:5001 -v C:/localhub/registry:/var/lib/registry --restart=always --name hub.local registry
because port 5000 is used by another application.
But I can't reach
http://localhost:5001/v2/_catalog
The first part of the -p value is the host port and the second part is the port within the container.
This command makes the registry available on host port 5001 (inside the container it still listens on 5000):
docker run -d -p 5001:5000 --name hub.local registry
If you want to change the port the registry listens on within the container, you must use this command:
docker run -d -e REGISTRY_HTTP_ADDR=0.0.0.0:5001 -p 5001:5001 --name hub.local registry
docker run -d -p 5001:5000 -v C:/localhub/registry:/var/lib/registry --restart=always --name hub.local registry
Keep the internal (container) port the same and change only your local (host) port.
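Once the container is recreated with either command, the catalog endpoint from the question should answer (an empty repository list is what a fresh registry returns):
curl http://localhost:5001/v2/_catalog
# {"repositories":[]}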
I have this right now:
docker network rm cprev || echo;
docker network create cprev || echo;
docker run --rm -d -p '3046:3046' \
--net=cprev --name 'cprev-server' cprev-server
docker run --rm -d -p '3046:3046' \
-e cprev_user_uuid=111 --net=cprev --name 'cprev-agent-1' cprev-agent
docker run --rm -d -p '3046:3046' \
-e cprev_user_uuid=222 --net=cprev --name 'cprev-agent-2' cprev-agent
Basically, the two cprev agents are supposed to connect to the cprev-server over TCP. The problem is that I am getting this error:
docker: Error response from daemon: driver failed programming external
connectivity on endpoint cprev-agent-1
(6e65bccf74852f1208b32f627dd0c05b3b6f9e5e7f5611adfb04504ca85a2c11):
Bind for 0.0.0.0:3046 failed: port is already allocated.
I am sure it's a simple fix, but frankly I don't know how to allow two-way traffic from the two agent containers without using the same port.
So this worked (using --network=host), but I am wondering how I can create a custom network that doesn't interfere with the host network:
docker network create cprev; # unused now
docker run --rm -d -e cprev_host='0.0.0.0' \
--network=host --name 'cprev-server' "cprev-server:$curr_uuid"
docker run --rm -d -e cprev_host='0.0.0.0' \
-e cprev_user_uuid=111 --network=host --name 'cprev-agent-1' "cprev-agent:$curr_uuid"
docker run --rm -d -e cprev_host='0.0.0.0' \
-e cprev_user_uuid=222 --network=host --name 'cprev-agent-2' "cprev-agent:$curr_uuid"
So is there any way to get this to work using my custom Docker network "cprev"?
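For what it's worth, a common pattern on a user-defined bridge network is to publish the port only on the server and let the agents reach it by container name through Docker's embedded DNS; the sketch below assumes the agents read their target address from the cprev_host variable already used above:
docker network create cprev || echo;
docker run --rm -d -p '3046:3046' \
--net=cprev --name 'cprev-server' cprev-server
# no -p on the agents: only one container can bind host port 3046, and the agents only dial out
docker run --rm -d -e cprev_host='cprev-server' \
-e cprev_user_uuid=111 --net=cprev --name 'cprev-agent-1' cprev-agent
docker run --rm -d -e cprev_host='cprev-server' \
-e cprev_user_uuid=222 --net=cprev --name 'cprev-agent-2' cprev-agent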
I am using seneca.js as a microservices framework. Everything works fine locally, but when I deploy the services to Swarm I have a problem with messages: the ping service is somehow blocked. Here is the error from my API gateway when I hit a route:
{"statusCode":502,"error":"Bad Gateway","message":"connect
ECONNREFUSED 10.100.0.5:55010"}
Here is my setup:
docker network create -d overlay --subnet 10.100.0.0/16 test-net
docker service create --network test-net -p 8400:8400 -p 8500:8500 -p 8600:53/udp --name node1 progrium/consul -server -bootstrap -ui-dir /ui
docker service create --network test-net --name bases -e HOST=bases -e REGISTRY=node1 vforv/bases:v2
docker service create --network test-net -p 5000:5000 --name api -e BASES=bases -e HOST=api -e REGISTRY=node1 vforv/api-gateway:v2
docker service create --network test-net --name ping -e HOST=ping -e REGISTRY=node1 vforv/ping-service:v3
Here is the code:
https://github.com/vforv/hapi-seneca-ts
Does anyone know what the problem is?
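Without the service code it is hard to say more, but the same DNS check used in the NSQ question above can be run against this stack (a sketch; the container ID is a placeholder to be filled in from docker ps, and it assumes the image ships nslookup):
# find the api-gateway task container on the node where it runs
docker ps --filter name=api
# then resolve the other services over the overlay network from inside it
docker exec -it <api-container-id> nslookup ping
docker exec -it <api-container-id> nslookup tasks.ping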