I have been developing some Hyperledger networks with Hyperledger Composer and I have a question around the Docker containers which are created. Every time I make an update I have to "Deploy changes" which spins up a new Docker container so I have a huge list of Docker containers. I was wondering if the new deployment containers are dependent on the previous ones or can I docker rm them?
They are independent; you can remove the old containers as long as you no longer need the old smart contract they contain.
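For example, a sketch of how you might find the old containers before removing them (Fabric conventionally gives chaincode containers a dev- prefix, but verify the names on your own setup before deleting anything):
docker ps -a --filter "name=dev-"      # list chaincode containers, old versions included
docker rm -f ID_OF_THE_OLD_CONTAINER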
I can create a Docker container with the command
docker run <image_name>
I can create a service with the command
docker service create <image_name>
What is the difference between these two in behaviour?
When would I need to create a service rather than a container?
The docker service command is used in a Docker swarm and replaces docker run. docker run was built for single-host solutions; its whole idea is to focus on local containers on the system it is talking to. In a cluster, by contrast, the individual containers are irrelevant: we simply use swarm services to manage multiple containers, and swarm orchestrates the containers of those services for us.
docker service create is mainly to be used in docker swarm mode. docker run has no concept of scaling up or down; with docker service create you can specify the number of replicas to be created using the --replicas flag. This creates and manages multiple replicas of a container across the nodes of the cluster. There are several such options for managing multiple containers under docker service and its subcommands.
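A minimal sketch (the service and image names are placeholders):
docker service create --name web --replicas 3 nginx    # start three replicas spread across the swarm
docker service update --replicas 5 web                 # scale the running service up to five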
One more note: docker services are for container orchestration systems (swarm). They have a built-in facility for failure recovery, i.e. a failed container is recreated; docker run would never recreate a container if it fails. When the docker service commands are used, we are not directly asking to perform an action like "create a single container"; rather, we are telling the orchestration system to "put this job in your queue and, when you can get to it, perform that action on the swarm". This means it has rollback facilities, failure mitigation and a lot of intelligence built in.
You need to consider using docker service create when in swarm mode and docker run when not in swarm mode. You can read up on docker swarm to understand docker services.
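You can watch the failure recovery happen with a rough sketch like this, assuming the web service from the snippet above:
docker rm -f $(docker ps -q --filter "name=web" | head -n 1)    # force-kill one of the task containers
docker service ps web                                           # swarm has already scheduled a replacement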
There is no real difference. In the official documentation you can read "Services are really just containers in production".
Services can be declared in "docker-compose.yml" and can be started from it. Once started, they will run as containers.
It is just a common way to name parts of your stack.
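A minimal docker-compose.yml sketch of a service declaration (the service and image names are placeholders):
version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"
Running docker-compose up -d then starts web as an ordinary container.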
In Hyperledger Fabric, each deployed chaincode runs in a separate Docker container.
Hyperledger Composer therefore creates a new container at each upgrade of the chaincode. From my understanding, composer-rest-server, or any other way of interacting with the Composer channel, always relies on the last version that has been deployed.
The framework itself does not stop containers running old chaincodes.
Should I do it manually? Is there a good reason to keep them running?
See Upgrading Hyperledger Fabric Business Network for the answer: you can stop them, yes. I suggest reading the link for more detail.
Once information is written on the blockchain (via Hyperledger Composer or any other means), you cannot remove it from the ledger.
Keeping the containers running old chaincodes can be considered a means of recovering your network (for example, if you made a mistake in the ACL and you can no longer access your network).
You can kill and remove old Docker containers using the following commands:
docker kill ID_OF_THE_OLD_CONTAINER
docker rm ID_OF_THE_OLD_CONTAINER
I am not new to Docker, but I am new to Docker Swarm.
Our deployments typically consist of building a new docker image with the latest code, pushing that to our registry and then running docker stack deploy against a compose file.
My question is, do I need to run docker stack rm $STACK_NAME before running the deploy?
I'm not sure if the deploy command for swarm is smart enough to figure out that a docker image has changed and that it needs to do something.
You redeploy the same stack name without deleting the old stack. If you have removed services from your compose file and want them deleted from the stack, include the --prune option. Any unchanged service is left unmodified, but for any service with changes, including a new image on the registry server, you will see a rolling update performed according to the update config you specify in the compose file.
When you use the default VIP to connect to a service, the VIP keeps the same IP address for as long as the service exists, even across rolling updates, so other containers connecting to your service can do so without worrying about a stale DNS reference. And with a replicated service, the rolling update can prevent any visible outage. The combination of the two gives you high availability that you would not have when deleting and recreating your swarm stack.
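As a sketch, the relevant compose pieces might look like this (stack, service and image names are placeholders):
version: "3.4"
services:
  app:
    image: registry.example.com/app:latest
    deploy:
      replicas: 3
      update_config:
        parallelism: 1    # update one task at a time
        delay: 10s        # pause between tasks
After pushing a new image, you rerun the same command with no removal step:
docker stack deploy --prune -c docker-compose.yml my_stack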
I have a swarm setup which has around 6 nodes. Whenever I execute a docker run or docker pull command from the swarm manager it downloads the new image on all the swarm nodes.
This is creating data redundancy and choking my network.
Is there any way I can avoid this ?
Swarm nodes need images available to them by design. That helps swarm start a container on an available node immediately when the node currently hosting the container crashes or goes into maintenance (drain mode).
On the other hand, Docker images are pulled only once; you can keep using them until you upgrade your service.
Also, Docker is designed for microservices; if your image is getting too large, maybe you should try to cut it down into multiple containers.
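For reference, putting a node into drain mode and back looks like this (the node name is a placeholder):
docker node update --availability drain node2     # its containers are rescheduled elsewhere
docker node update --availability active node2    # make it schedulable again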
After looking through Docker's official swarm explanations, GitHub issues and Stack Overflow answers, I'm still at a loss as to why I am having the problem that I have.
Issue at hand: docker-compose up starts services outside the swarm even though the swarm is active and has 2 nodes.
I'm using Docker version 1.12.1.
Following the swarm tutorial, I was able to start and scale my swarm using docker service create without any issues.
Running docker-compose up with a version 2 docker-compose.yml results in services starting outside of the swarm; I can see them through docker ps but not docker service ls.
I can see that docker-machine is presented as the tool that solves this problem, but then again it needs VirtualBox to be installed.
So my questions are:
Can I use docker-compose with docker-swarm (NOT docker-engine) without docker-machine and without the experimental build bundle functionality?
If docker service create can start a service on any node, is that an indication that the network configuration of the swarm is correct?
What are the advantages/disadvantages of docker-machine versus the experimental build functionality?
1) No. Docker Compose isn't integrated with the new swarm mode yet; issue 3656 on GitHub is tracking that. If you start containers on a swarm with Docker Compose at the moment, it uses docker run to start them, which is why you see them all on one node.
2) Yes. Actually, you don't need to create a service to validate the swarm: you can use docker node ls on the manager to confirm all the nodes are up and active, and docker node inspect to check a particular node (see the commands after this answer).
3) Docker Machine is also behind the 1.12 release, so if you start a swarm with Docker Machine it will be the 'old' type of swarm. The old Docker Swarm product needed a whole lot of extra setup for a key-value store, TLS, etc., which swarm mode gives you for free.
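For point 2, the validation commands would be (the node hostname is a placeholder):
docker node ls                        # every node should show as Ready and Active
docker node inspect --pretty node-1   # detailed status of a single node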
1) You can't start services using docker-compose on the new Docker swarm mode. There's a feature to convert a docker-compose file to the new DAB format, which is understood by the new swarm mode, but that's incomplete and experimental at this point. You basically need to use bash scripts to start services at the moment.
2) The nodes in a swarm (swarm mode) interact using their own overlay network; it's the one named ingress when you run docker network ls. You need to set up your own overlay network to run services in, e.g.:
docker network create -d overlay mynet
docker service create --name serv1 --network mynet nginx
3) I'm not sure what feature you mean by "experimental build". docker-machine is just a way to create hosts (the nodes). It facilitates setting up the docker daemon on each host and the certificates, and allows some basic maintenance (renewing the certs, stopping/starting a host if you're the one who created it). It doesn't create services, volumes or networks, or manage them; that's the job of the Docker API.
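For example, creating and targeting a host with docker-machine (a sketch using the virtualbox driver; the host name is a placeholder):
docker-machine create --driver virtualbox node1    # provision a VM running the docker daemon
eval $(docker-machine env node1)                   # point the local docker client at it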