Cannot start and register the chaincode - hyperledger

I'm following the Hyperledger Fabric chaincode setup instructions: http://hyperledger-fabric.readthedocs.io/en/latest/Setup/Chaincode-setup
I'm using docker toolbox and a peer is running in one terminal (docker-compose up).
In another docker terminal I try to start and register the chaincode:
CORE_CHAINCODE_ID_NAME=mycc CORE_PEER_ADDRESS=0.0.0.0:7051 ./chaincode_example02
I get this error:
Thanks in advance!
Update: I'm using the docker-compose.yml from the docs:
membersrvc:
  image: hyperledger/fabric-membersrvc
  command: membersrvc
vp0:
  image: hyperledger/fabric-peer
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://172.17.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_ID=vp0
    - CORE_PEER_PKI_ECA_PADDR=membersrvc:7054
    - CORE_PEER_PKI_TCA_PADDR=membersrvc:7054
    - CORE_PEER_PKI_TLSCA_PADDR=membersrvc:7054
    - CORE_SECURITY_ENABLED=true
    - CORE_SECURITY_ENROLLID=test_vp0
    - CORE_SECURITY_ENROLLSECRET=MwYpmSRjupbT
  links:
    - membersrvc
  command: sh -c "sleep 5; peer node start --peer-chaincodedev"
docker ps gives:
$ docker ps
CONTAINER ID   IMAGE                           COMMAND                  CREATED          STATUS         PORTS   NAMES
35050760e1df   hyperledger/fabric-peer         "sh -c 'sleep 5; peer"   21 minutes ago   Up 2 minutes           option3_vp0_1
209132c7f059   hyperledger/fabric-membersrvc   "membersrvc"             21 minutes ago   Up 2 minutes           option3_membersrvc_1
and docker-machine ls gives:
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                          SWARM   DOCKER    ERRORS
default   *        virtualbox   Running   tcp://192.168.99.109:2376            v1.12.3
So I also tried to start and register the chaincode with:
CORE_CHAINCODE_ID_NAME=mycc CORE_PEER_ADDRESS=192.168.99.109:7051 ./chaincode_example02

It seems like your peer is not reachable at 0.0.0.0:7051. To check whether you have a peer listening on 7051, use the command:
netstat -lnptu | grep 7051
Try setting CORE_PEER_ADDRESS to either the public or the private IP of the host instead of 0.0.0.0.
Also verify that you have forwarded port 7051 from the Docker container to the host.
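If netstat isn't available (e.g. on the Windows host running Docker Toolbox), the same reachability check can be sketched in Python; the 192.168.99.109 address is the docker-machine VM IP from the `docker-machine ls` output above:

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the peer's gRPC port on the docker-machine VM
if port_reachable("192.168.99.109", 7051, timeout=1.0):
    print("peer is reachable on 7051")
else:
    print("nothing answered on 7051 - check the port forwarding")
```

If this prints the second message, the problem is between the host and the container (port not published, or the wrong IP), not inside the peer itself.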

Related

Apache Guacamole in Docker containers: Creation of WebSocket tunnel to guacd failed

I installed Apache Guacamole using Docker on a CentOS 8.1 with Docker 19.03.
I followed the steps described here:
https://guacamole.apache.org/doc/gug/guacamole-docker.html
https://www.linode.com/docs/applications/remote-desktop/remote-desktop-using-apache-guacamole-on-docker/
I started the containers like this:
# mysql container
docker run --name guacamole-mysql -e MYSQL_RANDOM_ROOT_PASSWORD=yes -e MYSQL_ONETIME_PASSWORD=yes -d mysql/mysql-server
# guacd container
docker run --name guacamole-guacd -e GUACD_LOG_LEVEL=debug -d guacamole/guacd
# guacamole container
docker run --name guacamole-guacamole --link guacamole-guacd:guacd --link guacamole-mysql:mysql -e MYSQL_DATABASE=guacamole -e MYSQL_USER=guacamole -e MYSQL_PASSWORD=password -d -p 8080:8080 guacamole/guacamole
All went fine and I was able to access the Guacamole web interface on port 8080. I configured one VNC connection to another machine on port 5900. Unfortunately when I try to use that connection I get the following error in the web interface:
"An internal error has occurred within the Guacamole server, and the connection has been terminated..."
I had a look on the logs too and in the guacamole log I found this:
docker logs --tail all -f guacamole-guacamole
...
15:54:06.262 [http-nio-8080-exec-2] ERROR o.a.g.w.GuacamoleWebSocketTunnelEndpoint - Creation of WebSocket tunnel to guacd failed: End of stream while waiting for "args".
15:54:06.685 [http-nio-8080-exec-8] ERROR o.a.g.s.GuacamoleHTTPTunnelServlet - HTTP tunnel request failed: End of stream while waiting for "args".
I'm sure that the target machine (which is running the VNC server) is fine. I'm able to connect to it from both a VNC client and another older Guacamole which I installed previously (not using Docker).
My containers look ok too:
docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ad62aaca5627 guacamole/guacamole "/opt/guacamole/bin/…" About an hour ago Up About an hour 0.0.0.0:8080->8080/tcp guacamole-guacamole
a46bd76234ea guacamole/guacd "/bin/sh -c '/usr/lo…" About an hour ago Up About an hour 4822/tcp guacamole-guacd
ed3a590b19d3 mysql/mysql-server "/entrypoint.sh mysq…" 2 hours ago Up 2 hours (healthy) 3306/tcp, 33060/tcp guacamole-mysql
I connected to the guacamole-guacamole container and pinged the other two containers: guacamole-mysql and guacamole-guacd. Both look fine and reachable.
docker exec -it guacamole-guacamole bash
root@ad62aaca5627:/opt/guacamole# ping guacd
PING guacd (172.17.0.2) 56(84) bytes of data.
64 bytes from guacd (172.17.0.2): icmp_seq=1 ttl=64 time=0.191 ms
64 bytes from guacd (172.17.0.2): icmp_seq=2 ttl=64 time=0.091 ms
root@ad62aaca5627:/opt/guacamole# ping mysql
PING mysql (172.17.0.3) 56(84) bytes of data.
64 bytes from mysql (172.17.0.3): icmp_seq=1 ttl=64 time=0.143 ms
64 bytes from mysql (172.17.0.3): icmp_seq=2 ttl=64 time=0.102 ms
Looks like there is a communication issue between the guacamole itself and guacd. And this is where I'm completely stuck.
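The "End of stream while waiting for \"args\"" error means guacamole opened the TCP connection but guacd closed it before finishing the handshake. A rough probe of that handshake can be sketched in Python, assuming the standard Guacamole protocol framing (length-prefixed, dot-separated elements; the `guacd` hostname and port 4822 come from the container setup above, and `probe_guacd` is only an illustrative helper):

```python
import socket

def encode_instruction(opcode, *args):
    """Encode a Guacamole-protocol instruction: every element is written as
    '<length>.<value>', elements are comma-separated, terminated by ';'."""
    elements = [opcode, *args]
    return ",".join(f"{len(e)}.{e}" for e in elements) + ";"

def probe_guacd(host="guacd", port=4822, timeout=5.0):
    """Open a connection to guacd and start the handshake with a 'select'
    instruction. A healthy guacd replies with an 'args' instruction; an
    empty reply matches 'End of stream while waiting for "args"'."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(encode_instruction("select", "vnc").encode("utf-8"))
        return s.recv(4096).decode("utf-8", errors="replace")

print(encode_instruction("select", "vnc"))  # 6.select,3.vnc;
```

Running `probe_guacd()` from inside the guacamole container distinguishes "guacd unreachable" (connection error) from "guacd drops the handshake" (empty reply).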
EDIT
I tried on CentOS 7 and I got the same issues.
I also tried this solution https://github.com/boschkundendienst/guacamole-docker-compose as suggested by @BatchenRegev, but I got the same issue again.
I've been experiencing the same issues under centos.
My only difference is that I'm hosting the database on a separate machine as this is all cloud-hosted and I want to be able to destroy/rebuild the guacamole server at will.
I ended up creating a docker-compose.yml file, as that seemed to work better.
Other gotchas I came across:
make sure the guacd_hostname is the actual machine hostname and not 127.0.0.1
set SELinux to allow httpd:
sudo setsebool -P httpd_can_network_connect
My docker-compose.yml is shown below. Replace all {variables} with your own, and update the file if you are using a SQL image as well.
version: "2"
services:
  guacd:
    image: "guacamole/guacd"
    container_name: guacd
    hostname: guacd
    restart: always
    volumes:
      - "/data/shared/guacamole/guacd/data:/data"
      - "/data/shared/guacamole/guacd/conf:/conf:ro"
    expose:
      - "4822"
    ports:
      - "4822:4822"
    network_mode: bridge
  guacamole:
    image: "guacamole/guacamole"
    container_name: guacamole
    hostname: guacamole
    restart: always
    volumes:
      - "/data/shared/guacamole/guacamole/guac-home:/data"
      - "/data/shared/guacamole/guacamole/conf:/conf:ro"
    expose:
      - "8080"
    ports:
      - "8088:8080"
    network_mode: bridge
    environment:
      - "GUACD_HOSTNAME={my_server_hostname}"
      - "GUACD_PORT=4822"
      - "MYSQL_PORT=3306"
      - "MYSQL_DATABASE=guacamole"
      - "GUACAMOLE_HOME=/data"
      - "MYSQL_USER=${my_db_user}"
      - "MYSQL_PASSWORD=${my_db_password}"
      - "MYSQL_HOSTNAME=${my_db_hostname}"
I had the same problem on FreeBSD 12.2 - SOLUTION
Change the "localhost" hostname in
/usr/local/etc/guacamole-client/guacamole.properties
to the actual IP, for example:
guacd-hostname: 192.168.10.10
Next, in /usr/local/etc/guacamole-server/guacd.conf:
[server]
bind_host = 192.168.10.10
Check /etc/guacamole/guacamole.properties; I have a link with:
guacd-hostname: 192.168.10.10
Restart:
/usr/local/etc/rc.d/guacd restart
/usr/local/etc/rc.d/tomcat9 restart
With the name "localhost" I got:
11:01:48.010 [http-nio-8085-exec-3] DEBUG o.a.g.s.GuacamoleHTTPTunnelServlet - Internal error in HTTP tunnel.
I hope it will be useful to someone else - it works for me.

docker-compose UnknownHostException : but docker run works

I have a docker image (lfs-service:latest) that I'm trying to run as part of a suite of micro services.
RHELS 7.5
Docker version: 1.13.1
docker-compose version 1.23.2
Postgres 11 (installed on RedHat host machine)
The following command works exactly as I would like:
docker run -d \
-p 9000:9000 \
-v "$PWD/lfs-uploads:/lfs-uploads" \
-e "SPRING_PROFILES_ACTIVE=dev" \
-e dbhost=$HOSTNAME \
--name lfs-service \
[corp registry]/lfs-service:latest
This successfully:
creates/starts a container with my Spring Boot Docker image on port 9000
writes the uploads to disk into the lfs-uploads directory
and connects to a local Postgres DB that's running on the host machine (not in a Docker container).
My service works as expected. Great!
Now, my problem:
I'm trying to run/manage my services using Docker Compose with the following content (I have removed all other services and my API gateway from docker-compose.yaml to simplify the scenario):
version: '3'
services:
  lfs-service:
    image: [corp registry]/lfs-service:latest
    container_name: lfs-service
    stop_signal: SIGINT
    ports:
      - 9000:9000
    expose:
      - 9000
    volumes:
      - "./lfs-uploads:/lfs-uploads"
    environment:
      - SPRING_PROFILES_ACTIVE=dev
      - dbhost=$HOSTNAME
Relevant entries in application.yaml:
spring:
  profiles: dev
  datasource:
    url: jdbc:postgresql://${dbhost}:5432/lfsdb
    username: [dbusername]
    password: [dbpassword]
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: update
Execution:
docker-compose up
...
The following profiles are active: dev
...
Tomcat initialized with port(s): 9000 (http)
...
lfs-service | Caused by: java.net.UnknownHostException: [host machine hostname]
lfs-service | at
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) ~[na:1.8.0_181]
lfs-service | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_181]
lfs-service | at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_181]
lfs-service | at org.postgresql.core.PGStream.<init>(PGStream.java:70) ~[postgresql-42.2.5.jar!/:42.2.5]
lfs-service | at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:91) ~[postgresql-42.2.5.jar!/:42.2.5]
lfs-service | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192) ~[postgresql-42.2.5.jar!/:42.2.5]
...
lfs-service | 2019-01-11 18:46:54.495 WARN [lfs-service,,,] 1 --- [ main] o.s.b.a.orm.jpa.DatabaseLookup : Unable to determine jdbc url from datasource
lfs-service |
lfs-service | org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta-data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is org.postgresql.util.PSQLException: The connection attempt failed.
lfs-service | at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:328) ~[spring-jdbc-5.1.2.RELEASE.jar!/:5.1.2.RELEASE]
lfs-service | at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:356) ~[spring-jdbc-5.1.2.RELEASE.jar!/:5.1.2.RELEASE]
...
Both methods of starting should be equivalent, but obviously there's a functional difference... Any ideas on how to resolve this issue or write a comparable docker-compose file that is functionally identical to the docker run command at the top?
NOTE: I've also tried the following values for dbhost: localhost, 127.0.0.1 - this won't work as it attempts to find the DB in the container, and not on the host machine.
CORRECTION:
Unfortunately, while this solution works in the simplest use case, it breaks Eureka and API gateways, as the container runs on a separate network. I'm still looking for a working solution.
To anyone looking for a solution to this question, this worked for me:
docker-compose.yaml:
lfs-service:
  image: [corp repo]/lfs-service:latest
  container_name: lfs-service
  stop_signal: SIGINT
  ports:
    - 9000:9000
  expose:
    - 9000
  volumes:
    - "./lfs-uploads:/lfs-uploads"
  environment:
    - SPRING_PROFILES_ACTIVE=dev
    - dbhost=localhost
  network_mode: host
Summary of changes made to docker-compose.yaml:
change $HOSTNAME to "localhost"
Add "network_mode: host"
I have no idea if this is the "correct" way to resolve this, but since it's only for our remote development server the solution is working for me. I'm open to suggestions if you have a better solution.
Working solution
The simple solution is to just provide the host machine IP address (vs hostname).
environment:
  - SPRING_PROFILES_ACTIVE=dev
  - dbhost=172.18.0.1
Setting this via an environment variable would probably be more portable:
export DB_HOST_IP=172.18.0.1
docker-compose.yaml
environment:
  - SPRING_PROFILES_ACTIVE=dev
  - dbhost=${DB_HOST_IP}
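The env-var indirection above can be mirrored in a quick sketch; the `jdbc_url` helper is purely illustrative (Spring does this substitution itself via the `${dbhost}` placeholder), and 172.18.0.1 (the compose bridge gateway from the answer) is just a default that may differ per host:

```python
import os

# Hypothetical helper mirroring the DB_HOST_IP -> dbhost -> JDBC URL chain above.
def jdbc_url(database, port=5432):
    # Fall back to the bridge gateway IP if the variable is unset.
    host = os.environ.get("DB_HOST_IP", "172.18.0.1")
    return f"jdbc:postgresql://{host}:{port}/{database}"

os.environ["DB_HOST_IP"] = "172.18.0.1"
print(jdbc_url("lfsdb"))  # jdbc:postgresql://172.18.0.1:5432/lfsdb
```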

Creating organisation with 2 peers

I am trying to extend this code to add one more endorsing peer to the organization org1. I updated /basic-network/crypto-config.yaml as follows:
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 2
    Users:
      Count: 1
Then I regenerated all the crypto material by running generate.sh again. I updated the FABRIC_CA_SERVER_CA_KEYFILE in the basic-network/docker-compose.yaml file. I added peer1.org1.example.com in docker-compose.yaml:
peer1.org1.example.com:
  container_name: peer1.org1.example.com
  image: hyperledger/fabric-peer
  environment:
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_PEER_ID=peer1.org1.example.com
    - CORE_LOGGING_PEER=debug
    - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_LOCALMSPID=Org1MSP
    - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
    - CORE_PEER_ADDRESS=peer1.org1.example.com:7056
    # the following setting starts chaincode containers on the same
    # bridge network as the peers
    # https://docs.docker.com/compose/networking/
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_basic
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
    # The CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME and CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
    # provide the credentials for the ledger to connect to CouchDB. The username and password must
    # match the username and password set for the associated CouchDB.
    - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
    - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric
  command: peer node start
  # command: peer node start --peer-chaincodedev=true
  ports:
    - 7056:7056
    - 7058:7058
  volumes:
    - /var/run/:/host/var/run/
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/msp/peer
    - ./crypto-config/peerOrganizations/org1.example.com/users:/etc/hyperledger/msp/users
    - ./config:/etc/hyperledger/configtx
  depends_on:
    - orderer.example.com
    - couchdb
    - peer0.org1.example.com
  networks:
    - basic
I ran the following commands then:
set -ev
export MSYS_NO_PATHCONV=1
docker-compose -f docker-compose.yml down
docker-compose -f docker-compose.yml up -d ca.example.com orderer.example.com peer0.org1.example.com peer1.org1.example.com couchdb
All the containers are up and running without any error.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3393fc796417 dev-peer0.org1.example.com-tuna-app-1.0-b58eb592ed6ced10f52cc063bda0c303a4272089a3f9a99000d921f94b9bae9b "chaincode -peer.add…" 2 minutes ago Up 2 minutes dev-peer0.org1.example.com-tuna-app-1.0
a45d7e943068 hyperledger/fabric-tools "/bin/bash" 2 minutes ago Up 2 minutes cli
d3698fc6d3d3 hyperledger/fabric-peer "peer node start" 2 minutes ago Up 2 minutes 0.0.0.0:7056->7056/tcp, 0.0.0.0:7058->7058/tcp peer1.org1.example.com
b7c92a70fc89 hyperledger/fabric-peer "peer node start" 2 minutes ago Up 2 minutes 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp peer0.org1.example.com
21fabe33e3e5 hyperledger/fabric-ca "sh -c 'fabric-ca-se…" 2 minutes ago Up 2 minutes 0.0.0.0:7054->7054/tcp ca.example.com
ddaf8390c0ee hyperledger/fabric-couchdb "tini -- /docker-ent…" 2 minutes ago Up 2 minutes 4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp couchdb
38bdb92c6de0 hyperledger/fabric-orderer "orderer" 2 minutes ago Up 2 minutes 0.0.0.0:7050->7050/tcp orderer.example.com
Next, I need to create a channel with two peers. How do I modify the commands below to add peer1 to org1?
# Create the channel
docker exec -e "CORE_PEER_LOCALMSPID=Org1MSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org1.example.com/msp" peer0.org1.example.com peer channel create -o orderer.example.com:7050 -c mychannel -f /etc/hyperledger/configtx/channel.tx
# Join peer0.org1.example.com to the channel.
docker exec -e "CORE_PEER_LOCALMSPID=Org1MSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org1.example.com/msp" peer0.org1.example.com peer channel join -b mychannel.block
The above two commands run successfully, but when I try to run the following:
docker exec -e "CORE_PEER_LOCALMSPID=Org1MSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org1.example.com/msp" peer1.org1.example.com peer channel join -b mychannel.block
I get the following error:
Error: Error getting endorser client channel: endorser client failed to connect to peer1.org1.example.com:7056: failed to create new connection: context deadline exceeded
Keep using 7051 and 7053 as the internal ports for your peers; only the published ports need to change:
ports:
  - 7056:7051
  - 7058:7053
I can't find any official documentation on this, but it seems that by default it always uses the address of the anchor peer of the organization. The ports used for handling incoming requests for peers in an organization should be the same as the anchor peer's port.
From the core.yaml config file.
# The Address at local network interface this Peer will listen on.
# By default, it will listen on all network interfaces
listenAddress: 0.0.0.0:7051
[...]
# When used as peer config, this represents the endpoint to other peers
# in the same organization. For peers in other organization, see
# gossip.externalEndpoint for more info.
# When used as CLI config, this means the peer's endpoint to interact with
address: 0.0.0.0:7051
[...]
gossip:
      # Bootstrap set to initialize gossip with.
      # This is a list of other peers that this peer reaches out to at startup.
      # Important: The endpoints here have to be endpoints of peers in the same
      # organization, because the peer would refuse connecting to these endpoints
      # unless they are in the same organization as the peer.
      bootstrap: 127.0.0.1:7051
If you really want to use another internal port (like 7056), try defining CORE_PEER_LISTENADDRESS in your docker-compose file for your new peer (it defines the gRPC server's listen port):
CORE_PEER_LISTENADDRESS=0.0.0.0:7056
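The invariant described above - the container-side port of at least one published mapping must match the port the peer actually listens on - can be sketched as a small check (the `listen_port_matches_mapping` helper is hypothetical, not part of Fabric):

```python
def listen_port_matches_mapping(listen_address, port_mappings):
    """Check that the container-side port of at least one 'host:container'
    mapping equals the port in CORE_PEER_LISTENADDRESS."""
    listen_port = listen_address.rsplit(":", 1)[1]
    return any(m.split(":")[1] == listen_port for m in port_mappings)

# peer1 as suggested above: publish 7056/7058 on the host, keep 7051/7053 inside
assert listen_port_matches_mapping("0.0.0.0:7051", ["7056:7051", "7058:7053"])
# the questioner's original setup: publishes 7056, but the peer listens on 7051
assert not listen_port_matches_mapping("0.0.0.0:7051", ["7056:7056", "7058:7058"])
```

The second case is exactly the mismatch that produces "context deadline exceeded": traffic reaches container port 7056 but nothing listens there.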
Also, I suggest not using the same CouchDB as the ledger database for all your peers. Use one CouchDB per peer (and it is recommended to set a username/password).
EDIT: As you don't run the peer channel join commands from your cli container, I think you have to fetch the channel through the orderer on your new peer to be able to join it (this returns the most recent configuration block for the targeted channel):
docker exec -e "CORE_PEER_LOCALMSPID=Org1MSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org1.example.com/msp" peer1.org1.example.com peer channel fetch config -o orderer.example.com:7050 -c mychannel
Then you should be able to join your peer to the channel using the newly created block (<channelID>_config.block).

docker-compose can't connect to adjacent service via service name

I have this docker-compose.yml that basically builds my project for e2e test. It's composed of a postgres db, a backend Node app, a frontend Node app, and a spec app which runs the e2e test using cypress.
version: '3'
services:
  database:
    image: 'postgres'
  backend:
    build: ./backend
    command: /bin/bash -c "sleep 3; yarn backpack dev"
    depends_on:
      - database
  frontend:
    build: ./frontend
    command: /bin/bash -c "sleep 15; yarn nuxt"
    depends_on:
      - backend
  spec:
    build:
      context: ./frontend
      dockerfile: Dockerfile.e2e
    command: /bin/bash -c "sleep 30; yarn cypress run"
    depends_on:
      - frontend
      - backend
The Dockerfiles are just simple Dockerfiles based on node:8 that copy the project files and run yarn install. In the spec Dockerfile, I pass http://frontend:3000 as FRONTEND_URL.
But this setup fails at the spec step: the cypress runner can't connect to frontend, with this error:
spec_1 | > Error: connect ECONNREFUSED 172.20.0.4:3000
As you can see, it resolves the hostname frontend to the IP correctly, but it's not able to connect. I'm scratching my head over why I can't connect to the frontend via the service name. If I switch the command on spec to sleep 30; ping frontend, it successfully pings the container. I've tried deleting the network and letting docker-compose recreate it, and I've tried specifying expose and links on the respective services. All to no avail.
I've set up a sample repo here if you wanna try replicating the issue:
https://github.com/afifsohaili/demo-dockercompose-network
Any help is greatly appreciated! Thank you!
Your application is listening on loopback:
$ docker run --rm --net container:demo-dockercompose-network_frontend_1 nicolaka/netshoot ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.11:35233 *:*
LISTEN 0 128 127.0.0.1:3000 *:*
From outside of the container, you cannot connect to ports that are only listening on loopback (127.0.0.1). You need to reconfigure your application to listen on all interfaces (0.0.0.0).
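The diagnosis above amounts to reading the Local Address column of `ss -lnt`: if every listener on the port is a loopback address, other containers can't reach it. That check can be automated with a small sketch (the `loopback_only` helper is illustrative; the sample text is the ss output shown above):

```python
def loopback_only(ss_output, port):
    """Given `ss -lnt` output, return True if the port is listening
    only on loopback addresses (127.0.0.0/8 or ::1)."""
    addrs = []
    for line in ss_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[0] == "LISTEN":
            host, _, p = fields[3].rpartition(":")  # Local Address:Port column
            if p == str(port):
                addrs.append(host)
    return bool(addrs) and all(a.startswith("127.") or a == "[::1]" for a in addrs)

sample = """State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.11:35233 *:*
LISTEN 0 128 127.0.0.1:3000 *:*"""
assert loopback_only(sample, 3000)        # unreachable from other containers
assert not loopback_only(sample.replace("127.0.0.1:3000", "*:3000"), 3000)
```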
For your app, in the package.json, you can add (according to the nuxt faq):
"config": {
  "nuxt": {
    "host": "0.0.0.0",
    "port": "3000"
  }
},
Then you should see:
$ docker run --rm --net container:demo-dockercompose-network_frontend_1 nicolaka/netshoot ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:3000 *:*
LISTEN 0 128 127.0.0.11:39195 *:*
And instead of an unreachable error, you'll now get a 500:
...
frontend_1 | response: undefined,
frontend_1 | statusCode: 500,
frontend_1 | name: 'NuxtServerError' }
...
spec_1 | The response we received from your web server was:
spec_1 |
spec_1 | > 500: Server Error

Deploy app on a cluster but cannot access it successfully

I'm now learning to use Docker by following the Get Started documents, but in part 4 (Swarms) I've hit a problem: after deploying my app on a cluster, I cannot access it successfully.
docker#myvm1:~$ docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
gsueb9ejeur5 getstartedlab_web.1 zhugw/get-started:first myvm1 Running Preparing 11 seconds ago
ku13wfrjp9wt getstartedlab_web.2 zhugw/get-started:first myvm2 Running Preparing 11 seconds ago
vzof1ybvavj3 getstartedlab_web.3 zhugw/get-started:first myvm1 Running Preparing 11 seconds ago
lkr6rqtqbe6n getstartedlab_web.4 zhugw/get-started:first myvm2 Running Preparing 11 seconds ago
cpg91o8lmslo getstartedlab_web.5 zhugw/get-started:first myvm2 Running Preparing 11 seconds ago
docker#myvm1:~$ curl 'http://localhost'
curl: (7) Failed to connect to localhost port 80: Connection refused
➜ ~ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 - virtualbox Running tcp://192.168.99.101:2376 v17.06.0-ce
myvm2 - virtualbox Running tcp://192.168.99.100:2376 v17.06.0-ce
➜ ~ curl 'http://192.168.99.101'
curl: (7) Failed to connect to 192.168.99.101 port 80: Connection refused
What's wrong?
In addition, something very strange: after adding the content below to docker-compose.yml, I found the above problem resolved automatically.
visualizer:
  image: dockersamples/visualizer:stable
  ports:
    - "8080:8080"
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
  deploy:
    placement:
      constraints: [node.role == manager]
  networks:
    - webnet
But this time the newly added visualizer does not work:
docker#myvm1:~$ docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
xomsv2l5nc8x getstartedlab_web.1 zhugw/get-started:first myvm1 Running Running 7 minutes ago
ncp0rljod4rc getstartedlab_visualizer.1 dockersamples/visualizer:stable myvm1 Running Preparing 7 minutes ago
hxddan48i1dt getstartedlab_web.2 zhugw/get-started:first myvm2 Running Running 7 minutes ago
dzsianc8h7oz getstartedlab_web.3 zhugw/get-started:first myvm1 Running Running 7 minutes ago
zpb6dc79anlz getstartedlab_web.4 zhugw/get-started:first myvm2 Running Running 7 minutes ago
pg96ix9hbbfs getstartedlab_web.5 zhugw/get-started:first myvm2 Running Running 7 minutes ago
From the above you can see it is always stuck in Preparing.
My whole docker-compose.yml
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: zhugw/get-started:first
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
networks:
  webnet:
Had this problem while learning too.
It's because your non-clustered container from step 2 is still running, and the clustered image you just deployed uses the same port mapping (4000:80) in the docker-compose.yml file.
You have two options:
Go into your docker-compose.yml and change the port mapping to something else, e.g. 4010:80, then redeploy your cluster with the update. Then try: http://localhost:4010
Remove the container you created in step 2 of the guide that's still running and using port mapping 4000:80
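The clash behind option 1 can be sketched as a small check over the published (host-side) ports (the `published_port_conflicts` helper is hypothetical; the 4000:80 mapping is the one from the guide):

```python
def published_port_conflicts(mappings):
    """Return host-side ports that appear in more than one 'host:container' mapping."""
    seen, dupes = set(), set()
    for m in mappings:
        host_port = m.split(":")[0]
        if host_port in seen:
            dupes.add(host_port)
        seen.add(host_port)
    return dupes

# the step-2 container on 4000:80 plus the new stack's own 4000:80
assert published_port_conflicts(["4000:80", "4000:80", "8080:8080"]) == {"4000"}
# remapping the stack to 4010:80 clears the clash
assert published_port_conflicts(["4000:80", "4010:80", "8080:8080"]) == set()
```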
volumes:
  - "/var/run/docker.sock:/var/run/docker.sock"
should be
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
This is an error in the Docker tutorials.
Open port 7946 TCP/UDP and port 4789 UDP between the swarm nodes. Use the ingress network. Please let me know if it works, thanks.
What helped me get the visualizer running was changing the visualizer image tag from stable to latest.
If you are using Docker toolbox for mac, then you should check this out.
I had the same problem. As it says in the tutorial (see "Having connectivity trouble?") the following ports need to be open:
Port 7946 TCP/UDP for container network discovery.
Port 4789 UDP for the container ingress network.
So I executed the following before the swarm init (right after creating myvm1 and myvm2) and could then access the service, e.g. in the browser at IP_node:4000:
$ docker-machine ssh myvm1 "sudo iptables -I INPUT -p tcp --dport 7946 --syn -j ACCEPT"
$ docker-machine ssh myvm2 "sudo iptables -I INPUT -p tcp --dport 7946 --syn -j ACCEPT"
$ docker-machine ssh myvm1 "sudo iptables -I INPUT -p udp --dport 7946 -j ACCEPT"
$ docker-machine ssh myvm2 "sudo iptables -I INPUT -p udp --dport 7946 -j ACCEPT"
$ docker-machine ssh myvm1 "sudo iptables -I INPUT -p udp --dport 4789 -j ACCEPT"
$ docker-machine ssh myvm2 "sudo iptables -I INPUT -p udp --dport 4789 -j ACCEPT"
Hope it helps others.
