When I use 'docker service update' on a peer container in my Docker swarm, the peer gets replaced by a new one.
The new one has almost the same name, e.g.
old: peer1.org1-223d2d23d23, new: peer1.org1-345245634ff4
It has access to all the files like channel.tx, genesis.block and mychannel.block in the peer/channel-artifacts directory. But the new peer has not joined the channel and no chaincode is installed on it.
I can't join the channel or install chaincode, because as far as the network is concerned peer1.org1 has already done both. However, if I fetch the oldest channel block, I can. But this gives a strange situation, I think.
So my question is:
How can a peer service in Docker swarm still be part of the stack/swarm after a service update or downtime, without it becoming a completely new peer container?
When you upgrade a container in Docker, Docker Swarm or Kubernetes, you are essentially replacing the container (i.e. there is really no concept of an in-place upgrade of the container) with another one which receives the same settings, environment, etc.
When running Docker in standalone mode and using volumes, this is fairly transparent as the new container is deployed on the same host as the prior container and therefore will mount the same volumes, etc.
It seems like you are already mounting some type of volume from shared storage / filesystem in order to access channel.tx, etc.
What you also need to do is make sure that you use volumes for the persistent storage used/required by the peer (and the orderer, etc. for that matter).
On the peer side, the two key attributes in core.yaml are:
peer.fileSystemPath - this defaults to /var/hyperledger/production and is where the ledger, installed chaincodes, etc are kept. The corresponding environment variable is CORE_PEER_FILESYSTEMPATH.
peer.mspConfigPath - where the local MSP info is stored. The corresponding environment variable is CORE_PEER_MSPCONFIGPATH.
You will want to mount those as volumes, and given you are using Swarm, those volumes will need to live on shared storage that is accessible from all of your Swarm hosts.
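A minimal sketch of that idea (the service name, network name, image tag, and volume names are assumptions, not taken from your setup) would mount both paths as named volumes so a replacement task reattaches to the same state:
# persist the ledger/chaincode directory and the local MSP as named volumes;
# the MSP volume must already contain the peer's crypto material
docker service create \
  --name peer1_org1 \
  --network hlf_overlay \
  --env CORE_PEER_FILESYSTEMPATH=/var/hyperledger/production \
  --env CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/fabric/msp \
  --mount type=volume,source=peer1_org1_production,target=/var/hyperledger/production \
  --mount type=volume,source=peer1_org1_msp,target=/etc/hyperledger/fabric/msp \
  hyperledger/fabric-peer:1.4
Keep in mind that with the default local volume driver these named volumes only exist on the node that runs the task, so in a multi-node Swarm they should be backed by shared storage (NFS, a volume plugin, etc.) as noted above.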
Related
First of all, I checked multiple questions on Stack Overflow, but the requirements were somewhat different or the solutions offered did not work in my case, so I thought of creating a new question.
I am using a local registry and I am able to push/pull images from all worker nodes as well.
The registry service is up and running on all nodes.
The issue occurs while creating a service from a local image that I have already pushed to the registry.
Issue:
overall progress: 0 out of 1 tasks
1/1: ready [======================================> ]
verify: Detected task failure
Steps I have done:
docker service create --name registry --publish 5000:5000 armbuild/registry (mine is raspi so used armbuild)
docker tag XYZImage localhost:5000/XYZImage -> Working Fine
docker push localhost:5000/XYZImage -> Working Fine
docker service create --name XYZService --replicas 2 localhost:5000/XYZImage --> Issue
Note: I even tried using the IP address and adding that address to the insecure-registries list in the daemon.json file.
Any leads? Or am I missing something?
Each container or node is writing to its own volume (/var/lib/registry). You must use a distributed storage driver if you want to use replicas.
From the documentation:
The storage back-end you use determines whether you use a fully scaled service or a service with either only a single node or a node constraint.
If you use a distributed storage driver, such as Amazon S3, you can use a fully replicated service. Each worker can write to the storage back-end without causing write conflicts.
If you use a local bind mount or volume, each worker node writes to its own storage location, which means that each registry contains a different data set. You can solve this problem by using a single-replica service and a node constraint to ensure that only a single worker is writing to the bind mount.
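For example, the single-replica / node-constraint approach from the quoted docs could look roughly like this (the node hostname and host path are assumptions):
# keep the registry data in one place; the routing mesh still exposes port 5000 on every node
docker service create \
  --name registry \
  --publish 5000:5000 \
  --replicas 1 \
  --constraint 'node.hostname==raspi-manager' \
  --mount type=bind,source=/mnt/registry-data,target=/var/lib/registry \
  armbuild/registry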
I have set up a 3-node cluster (with no Internet access) with 1 manager and 2 worker nodes using the standard swarm documentation.
How does the swarm manager in swarm mode know about the images present in worker nodes?
Let's say I have image A on worker-node-1 and image B on worker-node-2, and no images on the manager node.
Now how do I start a container for image A using the manager?
Will it start on the manager or on node-1?
When I query manager for the list of images will it give the whole list with A and B in it?
Does anyone know how this works?
I couldn’t get the details from the documentation.
A Docker Swarm manager node may also take on the worker role as a second role, but this is not strictly necessary.
The deployment policy is defined via docker-compose.yml, which holds information such as target nodes, networks, hostnames, volumes, etc. for each service. So a service will start either on the specified node or, by default, on the least loaded one.
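For example, if image A only exists on worker-node-1 (names from the question, the service name is a placeholder), the service can be pinned there with a placement constraint, and you can check where the task actually landed:
# pin the service to the node that already has the image, then inspect task placement
docker service create --name serviceA --constraint 'node.hostname==worker-node-1' imageA
docker service ps serviceA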
Swarm manager communicates with the worker nodes via Docker networks:
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
an overlay network called ingress, which handles control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default;
a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
Reference
During swarm deployment, the images of its services are propagated to the worker nodes according to their deployment policy.
The manager node will contain the images only if it also acts as a worker (correct me if that's not the case).
The default configuration with swarm mode is to pull images from a registry server and use pinning to reference a unique hash for those images. This can be adjusted, but there is no internal mechanism to distribute images within a cluster.
For an offline environment, I'd recommend a stand-alone registry server accessible to the cluster. You can even run it on the cluster. Push your image there, and point your services at the registry for the images they pull. See this doc for details on running a stand-alone registry, or any of the many 3rd-party options (e.g. Harbor): https://docs.docker.com/registry/
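Assuming the registry is published on port 5000 inside the swarm (as in the linked doc), pushing and referencing an image would look roughly like this (the image name and tag are placeholders):
# every node can reach the service-published registry at 127.0.0.1:5000,
# which Docker allows without TLS by default, so no insecure-registry config is needed
docker tag myapp:1.0 127.0.0.1:5000/myapp:1.0
docker push 127.0.0.1:5000/myapp:1.0
docker service create --name myapp --replicas 2 127.0.0.1:5000/myapp:1.0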
The other option is to disable the image pinning and manually copy images to each of your swarm nodes. You need to do this in advance of deploying any service changes. You'll also lose the benefit of reused image layers when you manually copy them. Because of all the issues this creates, the overhead to manage it, and the risk of mistakes, I'd recommend against this option.
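If you do go the manual route anyway, it would look roughly like this for each image and each node (image, node, stack, and file names are placeholders), plus skipping digest resolution at deploy time:
# copy the image to every node over SSH, then deploy without resolving digests
docker save myapp:1.0 | ssh worker-node-1 docker load
docker save myapp:1.0 | ssh worker-node-2 docker load
docker stack deploy --resolve-image never -c stack.yml mystack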
Run the docker stack deploy command with --with-registry-auth; that will give the workers access to pull the needed image.
By default, Docker Swarm will pull the latest image from the registry when deploying.
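For example (the compose file and stack name are placeholders):
# forward your registry credentials to the agents so every node can pull the image
docker stack deploy --with-registry-auth -c docker-compose.yml mystack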
I have a Hyperledger Fabric network running with Docker swarm. I want to test Fabric by taking some peers down and seeing whether the network still functions.
While the network is running, I stop/start a peer container. Then I use the 'docker service update $peer-service --force' command to see if it comes back as part of the service. Docker then makes a different, new container and adds it to the service.
The new container has not joined the channel and has no chaincode installed on it. The first container still exists but is not part of the swarm anymore. I think it will be very inconvenient to manually install everything on a peer when it goes down on an already running network with many chaincodes.
Is there a way to join the old peer container back as the same service to the swarm?
You need to use volumes so that the block/channel data persists. So map the directory in the peer container that contains this information to a directory on your host machine.
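On an already running service, a rough sketch of this could use --mount-add (the host path and service name are assumptions; /var/hyperledger/production is the peer's default data directory mentioned above):
# persist the ledger and installed chaincodes across service updates
docker service update \
  --mount-add type=bind,source=/opt/fabric/peer1.org1/production,target=/var/hyperledger/production \
  peer1_org1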
I have a swarm cluster with one manager and another normal node. When I create a swarm service, I create it with a mount type, mount source, and mount target. It creates a volume with the same name on both the manager and the node, starts the containers, and my service is up.
When I remove the service, the volume created along with the service is not deleted; this is still fine.
The problem I am facing is that when I delete the volume (by that same name), it only deletes the volume on the swarm manager; the volume created on the node while creating the service still exists.
I want the manager to delete all the volumes that were created along with the swarm service. Is there a way?
After a lot of analysis, here is the theory:
If you instruct swarm to create a service with a volume, swarm only takes care of creating the service across the cluster, i.e. on the multiple nodes. Yes, when you pass the volume details it does create the volume as well, but while removing the service it fails to check the worker nodes for the existence of that volume. It's a bug in Docker.
I have raised the bug in docker for it.
As of now, there is no other way than manually removing the volume from the worker nodes after removing the swarm service.
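Roughly, the manual cleanup looks like this (the service name, node name, and volume name are placeholders):
# remove the service, then remove the leftover volume on each worker node
docker service rm myservice
ssh node-1 docker volume rm myservice_data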
As far as I know, a volume is only created on nodes where a container is created. Is it possible that your service fails to start on one node, ends up on the other, and somehow swarm doesn't clean up? If that's the case, write an issue on GitHub.
Update (from comments):
According to the docker service create documentation:
A named volume is a mechanism for decoupling persistent data needed by your container from the image used to create the container and from the host machine. Named volumes are created and managed by Docker, and a named volume persists even when no container is currently using it. Data in named volumes can be shared between a container and the host machine, as well as between multiple containers. Docker uses a volume driver to create, manage, and mount volumes. You can back up or restore volumes using Docker commands.
So if you are using named volumes, the real question would be: why are they removed on the manager, and were they ever created there?
I am playing with Docker 1.12 swarm mode with orchestration! But there is one issue I am not able to find an answer to:
If you're running a service like nginx or redis, you don't worry about data persistence.
But if you're running a service like a database, you need data persistence. If something happens to your Docker instance, the manager will reschedule the container onto one of the available nodes, but by default Docker doesn't move data volumes to other nodes. To address this problem we can use third-party plugins like Flocker (https://github.com/ClusterHQ/flocker) or Rexray (https://github.com/emccode/rexray).
But the problem with this is that when one node fails, you lose the data. Flocker or Rexray do not deal with this.
We can solve this if we use something like NFS: I mount the same volume across my nodes, so we don't have to move the data between two nodes. If one of the nodes fails, the replacement needs to remember the Docker mount location; can we do this? If so, can we achieve it with Docker Swarm's built-in orchestration?
With Rexray, the data is stored outside the Docker swarm nodes (in Amazon S3, OpenStack Cinder, ...). So if you lose a node, you won't lose your persistent data. If the scheduler starts a new container that needs the data on another host, it will retrieve the external volume using the Rexray plugin and you're good to go.
Note: your external provider needs to allow you to perform a forced detach of the volume from the now-unavailable old nodes.
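For the NFS approach raised in the question, swarm's built-in --mount syntax can declare an NFS-backed named volume per service, so whichever node runs the task mounts the same export. A rough sketch (the NFS server address, export path, service name, and image are assumptions):
# the local volume driver mounts the NFS export on whatever node the task lands on
docker service create \
  --name db \
  --mount type=volume,source=dbdata,target=/var/lib/postgresql/data,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/exports/dbdata,volume-opt=o=addr=10.0.0.10 \
  postgres:12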