I am facing an issue with Docker Swarm. I am trying to deploy a network over AWS using Docker Swarm. All of my services are up and running fine after the docker stack deploy command.
I have two AWS instances. I deployed the orderer and 2 peers of one organisation on the first AWS instance. 2 peers of another organisation, along with the cli, have been deployed on the second AWS instance.
All the services running on instance 1 are able to communicate with each other, and all the services on instance 2 are able to communicate with each other. But if I try to connect to any service from the other instance, no luck.
Any idea what is happening here?
You can refer to the GitHub repo below; it should help you understand better how to implement Docker Swarm with Hyperledger Fabric:
https://github.com/sebastianpaulp/Balance_Transfer_Docker_Swarm
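If the containers themselves are healthy but cannot reach each other across instances, the usual suspects are blocked swarm ports between the two EC2 instances or services that do not share an overlay network. A minimal sketch of what to check (the network and stack names below are placeholders, not taken from the repo above):

# In the AWS security group, allow swarm traffic between the two instances:
#   TCP 2377        (cluster management)
#   TCP + UDP 7946  (node-to-node communication)
#   UDP 4789        (overlay network / VXLAN traffic)

# On the manager node, create an overlay network for the stack; --attachable
# lets standalone containers such as the cli join it too:
docker network create --driver overlay --attachable fabric-net

# Attach every service (and the cli) to that network in docker-compose.yml,
# then redeploy:
docker stack deploy -c docker-compose.yml fabric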
I have been playing around with Hyperledger to make it run on Kubernetes, and I was able to do so. The only thing I was not happy with was the solution/work-around for the container that is spun up when chaincode is instantiated by the peer.
Kubernetes is simply not aware of this container, as it was started not by Kubernetes but by the peer. And to make the peer and the chaincode talk to each other, I had to update the Docker daemon running on the Kubernetes node with the DNS server IP address of the kube-dns service.
Is it possible to instantiate chaincode in a way where Kubernetes is aware of the chaincode container,
and where the chaincode container is able to talk to the peer in a seamless fashion, rather than updating the Docker daemon process on the node within the Kubernetes cluster?
I have been investigating the same issue you are having. One alternative to using the Docker daemon on your Kubernetes node is spinning up a new container in your Pod using the DinD (Docker-in-Docker) technique; a rough sketch follows the reference below. This way you can instantiate the chaincode container in a natural way (you will be able to use KubeDNS, for example) as it shares the same network space as the Kubernetes Pod. I couldn't find any tutorial on the internet showing an implementation of this approach, but if you find one (or write it yourself) please share it on this thread.
Thank you
Reference:
https://medium.com/kokster/simpler-setup-for-hyperledger-fabric-on-kubernetes-using-docker-in-docker-8346f70fbe80
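To make the DinD idea above concrete, here is a rough sketch of a peer Pod with a Docker-in-Docker sidecar. The Pod name, the port and the reduced set of environment variables are assumptions for illustration only, not a complete Fabric deployment:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: peer0
spec:
  containers:
  - name: dind                       # sidecar Docker daemon; chaincode containers run inside it
    image: docker:dind
    securityContext:
      privileged: true               # DinD needs a privileged container
    env:
    - name: DOCKER_TLS_CERTDIR       # disable TLS so the peer can use plain tcp://localhost:2375
      value: ""
  - name: peer
    image: hyperledger/fabric-peer
    env:
    - name: CORE_VM_ENDPOINT         # point the peer at the sidecar daemon, not the node's daemon
      value: tcp://localhost:2375
EOF

Because both containers share the Pod's network namespace, the peer reaches the sidecar daemon on localhost, and the chaincode container lives entirely inside the Pod rather than directly on the node's Docker daemon.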
I've got a docker-compose.yml which, when deployed locally using either stack or compose, yields 3 services (parse-server, mongodb, web-app in nginx). I can get logs from those services using docker service logs <id>.
Using the same docker-compose.yml to deploy the stack to Amazon EC2, docker service logs <id> calls to the running services return nothing, as if I were cat'ing an empty file.
Does anybody know what could cause this and/or how I can fix it?
When you deploy a swarm to AWS using the Docker Docs buttons or via cloud, I believe it usually pipes all output to CloudWatch, organized by individual container. This is only helpful if that is how you created your swarm.
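If your swarm came from the Docker for AWS template, you can check CloudWatch directly; a quick sketch with the aws CLI (the log group name is a placeholder that depends on your CloudFormation stack):

# Find the log group created for the swarm:
aws logs describe-log-groups

# List the per-container log streams inside it:
aws logs describe-log-streams --log-group-name my-swarm-lg

# Dump the events of one container's stream:
aws logs get-log-events --log-group-name my-swarm-lg --log-stream-name <stream-name>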
I installed an Nginx Docker container service through AWS ECS, which is running without any issue. However, every other container service, such as centos, ubuntu, mongodb or postgres, installed through AWS ECS keeps restarting (de-registering, re-registering, or stuck in a pending state) in a loop. Is there a way to install these container services using AWS ECS without any issue on the ECS-Optimized Amazon Linux AMI? Also, is there a way to register Docker containers in AWS ECS that were manually pulled and run from Docker Hub?
Usually if a container is restarting over and over again, it's because it's not passing the health check that you set up. MongoDB, for example, does not speak HTTP, so if you set it up as a service in ECS with an HTTP health check, it will never pass the health check and will get killed off by ECS for failing it.
My recommendation would be to launch such services without a health check, either as standalone tasks or with your own health-check mechanism.
If the service you are trying to run does in fact have an HTTP interface and it's still not passing the health check and getting killed, then you should do some debugging to verify that the instance has the right security group rules to accept traffic from the load balancer. Additionally, you should verify that the ports you define in your task definition match the port of the health check.
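A few hedged aws CLI commands that help with that debugging (the cluster, service and target group identifiers below are placeholders):

# Why did ECS stop the tasks? Check the service events and the stoppedReason field:
aws ecs describe-services --cluster my-cluster --services mongodb-service
aws ecs describe-tasks --cluster my-cluster --tasks <task-id>

# What does the load balancer think of the targets behind the service?
aws elbv2 describe-target-health --target-group-arn <target-group-arn>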
Okay, here is my situation:
I created a Docker Swarm cluster using docker-machine. I can deploy any container, etc., so basically everything is working fine. My question right now is how to give someone else access to the cluster. I want other people to deploy containers on that cluster using docker-compose.
Docker machine configures the docker engine on each node to be secured using TLS:
https://docs.docker.com/engine/security/https/
The client configuration can be seen by running the "docker-machine config" command; for example, the following settings are used to access the remote Docker host:
--tlsverify
--tlscacert="~/.docker/machine/certs/ca.pem"
--tlscert="~/.docker/machine/certs/cert.pem"
--tlskey="~/.docker/machine/certs/key.pem"
-H=tcp://....
It's the files under ~/.docker/machine/certs that are needed by other users who want to connect to your swarm.
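For example, once another user has copied those three files, they can either pass the flags above on every call or export the equivalent environment variables (the manager address and cert directory below are placeholders):

export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/swarm-certs        # directory containing ca.pem, cert.pem and key.pem
export DOCKER_HOST=tcp://<manager-ip>:3376   # port depends on how the swarm manager was started

docker info            # should now report the swarm
docker-compose up -d   # docker-compose picks up the same variables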
I expect that docker will eventually create some form of user authentication and authorization.
Currently I have a bunch of RHEL7 VMs running on Rackspace and want to deploy Docker Swarm for testing purposes. The Docker docs only describe how to deploy Docker Swarm by using docker-machine.
Question:
Since VirtualBox cannot be used inside VMs, is there any other way to deploy Docker Swarm directly on my VMs without using docker-machine?
In fact, the Docker documentation shows how to set up a swarm cluster 'manually' without using docker-machine: Create a swarm for development
I think that this full step-by-step tutorial might be useful.
It details how to deploy Swarm with a multi-host network, without docker-machine, using Consul, and suggests two different means for Swarm agent discovery (a static file and a token).
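For reference, the token-based variant boils down to a handful of commands run directly on the VMs (IP addresses are placeholders, and each engine must be reachable on its TCP port, 2375 here):

# Anywhere with Docker: create a cluster token (uses the hosted discovery service):
docker run --rm swarm create
# -> prints a <cluster_id>

# On each RHEL7 VM: join the node to the cluster:
docker run -d swarm join --addr=<node-ip>:2375 token://<cluster_id>

# On the VM chosen as manager:
docker run -d -p 4000:2375 swarm manage token://<cluster_id>

# Talk to the whole cluster through the manager:
docker -H tcp://<manager-ip>:4000 info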