Is it possible to communicate between two hosts in Hyperledger Composer? - hyperledger

I am new to blockchain development. I need to know what ways are available to communicate between multiple hosts.
That is:
I host peer1 on pc1
I host peer2 on pc2
Is there any way for the two peers to communicate?
OS: Ubuntu 16.04
Composer: 0.16.6
Fabric: 1.0.4
Thanks

You can use Docker Swarm to configure a blockchain network across multiple hosts. Please refer to the post below.
https://medium.com/#wahabjawed/hyperledger-fabric-on-multiple-hosts-a33b08ef24f
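As a rough illustration, a minimal multi-host setup with Docker Swarm could look like the sketch below. This assumes both PCs run Docker and can reach each other on the swarm ports; the IP address and the network name (fabric-net) are placeholders for your own values.

# On pc1 (manager): initialize the swarm, advertising an IP that pc2 can reach
# (192.168.1.10 is a placeholder for pc1's address)
docker swarm init --advertise-addr 192.168.1.10

# docker swarm init prints a join command with a token; run it on pc2 (worker)
docker swarm join --token <TOKEN-FROM-MANAGER> 192.168.1.10:2377

# On pc1: create an attachable overlay network that spans both hosts
# ("fabric-net" is an arbitrary name used here for illustration)
docker network create --driver overlay --attachable fabric-net

# Fabric containers started on either host with --network fabric-net can then
# resolve and reach each other by container name.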

Related

Cannot join Docker manager node in Windows using tokens

My friend and I are trying to connect our Docker daemons using Docker Swarm. We are both using Windows OS and we are NOT on the same network. According to the Docker docs, each Docker host must have the following ports open:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
We have both added inbound and outbound firewall rules for the given ports. Still, we keep getting the same two errors while trying to join with the token created by the manager node using the docker swarm join --token command:
1. error response from daemon: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 192.168.65.3:2377: connect: connection refused"
2. Timeout error
Also, if either of us runs docker swarm init, it shows the IP address 192.168.65.3, which isn't part of any network we're connected to. What does that mean?
The Docker overlay tutorial also states that, in order to connect to the manager node, the worker node should specify the IP address of the manager:
docker swarm join --token <TOKEN> \
  --advertise-addr IP-ADDRESS-OF-WORKER-1 \
  IP-ADDRESS-OF-MANAGER:2377
Does that mean that in our case we have to use the public IP address of the manager node after enabling port forwarding?
Potential network issues aside, here is your problem:
We both are using Windows OS
I have seen this issue in other threads when attempting to use Windows nodes in a multi-node swarm. Here are some important pieces of information from the Docker overlay networks documentation:
Before you can create an overlay network, you need to either initialize your Docker daemon as a swarm manager using docker swarm init or join it to an existing swarm using docker swarm join. Either of these creates the default ingress overlay network which is used by swarm services by default.
Overlay network encryption is not supported on Windows. If a Windows node attempts to connect to an encrypted overlay network, no error is detected but the node cannot communicate.
By default, Docker encrypts all swarm service management traffic. As far as I know, disabling this encryption is not possible. Do not confuse this with the --opt encrypted option, as that involves encrypting application data, not swarm management traffic.
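For reference, the --opt encrypted flag applies when creating an overlay network; a quick example (the network name is arbitrary):

# Creates an overlay network whose application data traffic is encrypted.
# Per the note above, Windows nodes cannot attach to such a network.
docker network create --driver overlay --opt encrypted my-encrypted-net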
For a single-node swarm, using Windows is just fine. For a multi-node swarm, which would be deployed using Docker stack, I highly recommend using Linux for all worker and manager nodes.
A while ago I was using Linux as a manager node and Windows as a worker node. I noticed that joining the swarm would only work if the Linux machine was the swarm manager; if the Windows machine was the manager, joining the swarm would not work. After the Windows machine joined the swarm, container-to-container communication over a user-defined overlay network would not work no matter what. Replacing the Windows machine with a Linux machine fixed all issues.
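If both machines are moved to Linux, the setup boils down to something like this sketch (assuming ufw as the firewall; the advertise address is a placeholder for an address the worker can actually reach, e.g. a public IP with port forwarding or a VPN address):

# On each node: open the swarm ports listed in the question
sudo ufw allow 2377/tcp   # cluster management
sudo ufw allow 7946/tcp   # node-to-node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # overlay (VXLAN) traffic

# On the manager: advertise the reachable address explicitly
docker swarm init --advertise-addr <REACHABLE-MANAGER-IP>

# On the worker: join using the token and address printed by the manager
docker swarm join --token <TOKEN> <REACHABLE-MANAGER-IP>:2377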

How do Docker containers expose services?

I'm deploying a stack of services through the command:
docker stack deploy -c <docker-compose.yml> <stack-name>
And I'm mapping the ports of one of these services in the Compose file with ports: 8000:8000.
The network driver being used is overlay.
I can access these services via localhost:8000 and via the peers' IPs(?).
When I inspect the network that was created, I can see the local IP of each container (for instance, 10.0.1.2). But where is the external IP of the container (the one like 172.0. ...)?
I am running these Docker containers on an Ubuntu virtual machine.
How can I access the services running in containers from other nodes on other networks? Isn't it possible to access them via hostIP:port?
If so, how do I get the host IP? When I run docker-machine ip I get "host is not running".
[EDIT: I wasn't doing port mapping between the host and the VM in VirtualBox. Now it works!]
What's the best way to communicate between containers on the same swarm?
Thanks
What's the best way to communicate between containers on the same swarm? Through name discovery?
In general, if you communicate between containers, you should use the container/service name.
And for your other problem you probably want a reverse proxy like nginx or Traefik.
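As a sketch of both points (the service names web and api and the images are placeholders), a stack deployed onto an overlay network could look like this:

# docker-compose.yml used with "docker stack deploy" (placeholder services)
cat > docker-compose.yml <<'EOF'
version: "3.7"
services:
  web:
    image: nginx:alpine
    ports:
      - "8000:80"            # published on every swarm node via the routing mesh
  api:
    image: myorg/api:latest  # placeholder image
EOF

docker stack deploy -c docker-compose.yml mystack

# Inside the stack's overlay network, "web" reaches the other service simply
# by its service name, e.g. http://api:<port>, through Docker's built-in DNS.

Because the port is published through the swarm routing mesh, hostIP:8000 on any node of the swarm forwards to the service, which is also where a reverse proxy such as nginx or Traefik would typically sit.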

ELK containers are unable to connect on different nodes when firewall is running

I have an ELK Docker swarm setup running across 4 different hosts. I am able to ping containers that are on different hosts, but when I run curl http://elastic:9200 it does not connect. The Logstash and Kibana applications are unable to connect to the Elasticsearch containers (a 3-node ES cluster) which are on different hosts. I have opened all the ports mentioned in the Docker swarm documentation (https://docs.docker.com/engine/swarm/swarm-tutorial/#the-ip-address-of-the-manager-machine) on all hosts, but no luck. After stopping the firewall on all hosts, Logstash/Kibana are able to connect to Elasticsearch.
Attempted to resurrect connection to dead ES instance, but got an error Unable to connect to Elasticsearch at http://es-proxy:9200/
Has anyone experienced this issue? Thanks.
I was able to resolve the issue by adding my Docker network interface to the trusted zone, following this article:
https://success.docker.com/article/firewalld-problems-with-container-to-container-network-communications
Lastly, I added my Docker subnet as a trusted source:
firewall-cmd --permanent --zone=trusted --add-source=subnet/range
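For completeness, the firewalld commands follow the pattern below (the interface name docker_gwbridge and the 10.0.0.0/24 range are assumptions; substitute your own interface and subnet):

# Trust the Docker swarm bridge interface (interface name assumed)
firewall-cmd --permanent --zone=trusted --add-interface=docker_gwbridge

# Trust the Docker/overlay subnet (example range; use your own)
firewall-cmd --permanent --zone=trusted --add-source=10.0.0.0/24

# Apply the permanent rules
firewall-cmd --reload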

Creating a docker instance on connection to a WiFi network

Is it possible to spin up a Docker instance on connection to a WiFi network?
I am looking to create a sandbox environment on connection to a network, and I figured Docker was the most suitable technology for this.
With a native Linux install, WiFi is just another NIC to the kernel. Docker will bridge connections to this NIC without issue.
With Docker for Windows and Docker for Mac, the embedded Linux VM may need to be restarted to pick up networking changes if you change your networking environment (new DNS server, etc.).
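To actually trigger a container when a connection comes up under Linux, one option is a NetworkManager dispatcher script; this is only a sketch, and the image and container names (my-sandbox, sandbox) are placeholders. Save it as an executable file under /etc/NetworkManager/dispatcher.d/:

#!/bin/sh
# NetworkManager passes the interface name as $1 and the action as $2.
if [ "$2" = "up" ]; then
    # Replace any previous sandbox and start a fresh one on connection.
    docker rm -f sandbox 2>/dev/null
    docker run -d --name sandbox my-sandbox
fi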

Is it possible to run a VPN client inside a docker container?

Is it possible to run a VPN client inside a docker container?
And if it is, then will it be possible to communicate between the host and the container?
An example of the architecture -
Host (172.0.0.1) <-> Container (172.0.0.3 & 222.104.0.105) <-> VPN (222.106.3.5)
Thanks in advance!
A configuration example for such a scenario can be found here: https://greenfrognest.com/LMDSVPN.php
It is based on a specific VPN client Docker container (dperson/openvpn-client), but as far as I can see it can be configured with any VPN provider.
The associated YouTube video with the above instructions can be found here.
Yes, it is possible to run OpenVPN or a similar client in a container; you will find many on Docker Hub (see http://registry.hub.docker.com). If you run it with docker run --net=host, it will be able to communicate with the host.
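A rough sketch using the dperson/openvpn-client image mentioned above (the /vpn config mount point follows that image's documentation as far as I recall, and /path/to/vpn-config is a placeholder, so double-check the image's README):

# VPN client in its own network namespace; it needs the TUN device and NET_ADMIN
docker run -d --name vpn-client \
    --cap-add=NET_ADMIN --device /dev/net/tun \
    -v /path/to/vpn-config:/vpn \
    dperson/openvpn-client

# Or share the host's network namespace, so the host uses the tunnel as well
docker run -d --name vpn-client --net=host \
    --cap-add=NET_ADMIN --device /dev/net/tun \
    -v /path/to/vpn-config:/vpn \
    dperson/openvpn-client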
