Update swarm when a new build is ready from Docker Hub - dockerhub

Reading this regarding webhooks, is it possible to have a swarm service update when a new build is ready?
That is, via an automated build.
If you could just point me in a direction.
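For context, whatever receives the webhook ultimately just needs to run a service update against the new image. A minimal sketch of that command, assuming a hypothetical service named my_service and an automated-build image myuser/myapp:

    # Re-pull the image and roll the service to the new build.
    # --with-registry-auth is only needed for private registries;
    # --force re-runs the update even if the tag string is unchanged.
    docker service update \
      --with-registry-auth \
      --force \
      --image myuser/myapp:latest \
      my_service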

Related

How to trigger a docker swarm service update (remotely)?

I'm running Docker services as stacks on Docker hosts in swarm mode. To perform rolling updates in the CI/CD pipeline I used the Portainer feature: webhooks. But whenever I have to update Portainer itself (or the load balancer), the webhook is regenerated and I have to update all pipelines with the new GUID.
Is there any other way to trigger this update? Maybe something like Watchtower or DockerUpdater, but one that supports Docker swarm services?
It doesn't need to be triggered via webhook or remotely. It could also work like Watchtower, or like Traefik - configured with labels.
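One webhook-free workaround (not from the thread, just a sketch of the poll-and-update idea): a small script run from cron on a manager node that makes each service re-resolve its tag and roll if the digest changed. The service names below are hypothetical.

    #!/bin/sh
    # Re-resolve each service's image tag and update the service if the
    # registry now holds a newer digest for that tag. Run on a swarm manager.
    for svc in web api worker; do
      # Current image spec looks like "name:tag@sha256:..."; strip the pinned digest.
      image=$(docker service inspect --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}' "$svc" | cut -d@ -f1)
      docker service update --with-registry-auth --image "$image" "$svc"
    done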

Docker swarm scale and new swarm nodes issue

I'm testing Docker swarm on a multi-node cluster.
Version: 20.10.7
The point is that if I create a service with docker service create and then join nodes, everything works (I use --with-registry-auth; the manager node is logged in to a private registry on AWS), meaning the application is replicated on the nodes: the image is pulled and the containers start.
I then kill the nodes manually and scale the service to 0 with:
docker service scale myserv=0
Then, when I start a new node, join it to the cluster and try to scale up, the image on the node is not pulled; it says "no such image".
That's strange, since if I re-create the service it is able to pull the image on the nodes. It is as if docker service scale doesn't log in to the remote registry on the nodes.
Any tips to solve this? It would be nice to add/remove nodes and have containers scaled automatically according to the scale instruction of the service I've created.
Thanks
We use Docker swarm extensively in our production environment, and we use the ECR credential helper; check out amazon-ecr-credential-helper.
Link to GitHub: git hub project link
As described there, you can have a startup script on the nodes to store your credentials.
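A rough sketch of what such a node startup script might write, so that image pulls on every node authenticate to ECR through the helper (the account ID and region below are placeholders, and the docker-credential-ecr-login binary is assumed to be installed on PATH):

    #!/bin/sh
    # Node startup script (sketch): point Docker at the ECR credential helper
    # for the private registry, so pulls on this node can authenticate.
    mkdir -p ~/.docker
    printf '%s\n' \
      '{' \
      '  "credHelpers": {' \
      '    "123456789012.dkr.ecr.eu-west-1.amazonaws.com": "ecr-login"' \
      '  }' \
      '}' > ~/.docker/config.json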

Docker Swarm Deployment Hyperledger Fabric

I am facing an issue with Docker swarm. I am trying to deploy a Hyperledger Fabric network on AWS using Docker swarm. All of my services are up and running fine after the docker stack deploy command.
I have two AWS instances. I deploy the orderer and 2 peers of one organisation on the first AWS instance. 2 peers of another organisation, along with the CLI, are deployed on the second AWS instance.
All the services running on instance 1 are able to communicate with each other, and all the services on instance 2 are able to communicate with each other. But if I try to reach any service from the other instance, no luck.
Any idea what is happening there?
You can refer to the GitHub repo below; it should help you better understand how to run Hyperledger Fabric on Docker swarm:
https://github.com/sebastianpaulp/Balance_Transfer_Docker_Swarm
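As a general pointer (not part of the answer above), cross-instance traffic in swarm usually fails when the services do not share an overlay network or when the swarm ports are blocked between the instances. A sketch, with a hypothetical network name:

    # On the manager: create an attachable overlay network, and put every
    # Fabric service (orderer, peers, CLI) on it in the stack file.
    docker network create --driver overlay --attachable fabric_net

    # Between the two AWS instances, the security groups must allow:
    #   2377/tcp            - swarm cluster management
    #   7946/tcp, 7946/udp  - node-to-node communication
    #   4789/udp            - overlay (VXLAN) data traffic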

How can I update my container image in Bluemix without stopping the actual service that uses the container?

How can I update my container image in Bluemix without stopping the actual service that uses the container?
Normally I stop > remove > create a new container. That is not very productive.
IBM Bluemix Container Service now combines Docker and Kubernetes. Kubernetes is an open source system for orchestrating containerized applications that you can deploy to Bluemix, and zero-downtime rolling deployment is a native feature of Kubernetes.
You should consider migrating your old containers to the new IBM Bluemix Container Service.
For more information on creating clusters:
https://console.bluemix.net/docs/containers/container_index.html#clusters
Managing rolling deployments with IBM Bluemix Container Service
https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_rolling
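For illustration only (the deployment name, namespace, and tag below are placeholders), a Kubernetes rolling update typically looks like this:

    # Point the deployment at the new image tag; Kubernetes replaces pods gradually,
    # so the service keeps answering requests during the rollout.
    kubectl set image deployment/my-app my-app=registry.ng.bluemix.net/mynamespace/my-app:v2

    # Watch the rollout, and roll back if something goes wrong.
    kubectl rollout status deployment/my-app
    kubectl rollout undo deployment/my-app   # only if needed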

Jenkins docker plugin and linked slaves

I want to be able to start multiple linked containers on demand, with a "restrict where this build runs" label, like I do with the Docker plugin for a single container.
I'm currently running Jenkins inside a Docker container and have configured a slave cloud using the Docker plugin to provide a single slave container per job; this provisioning is done on demand by the plugin.
But now I have some new requirements, for example:
Starting a Node.js application container linked to a Selenium Grid container for Protractor e2e testing
Starting a container with a Node.js application linked to a Redis server in another container.
Currently, the Docker plugin does not support linked containers, so how should I approach these scenarios?
I know how to start multiple linked containers with docker-compose, but there are currently no Jenkins plugins for Compose.
I was able to get Docker-in-Docker working and thought about having a DinD job that uses Compose in a pre-setup step, but I find this a rather inelegant solution (a sketch of that workaround is below).
Is there a plugin-based solution?
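For reference, the Compose-in-a-shell-step workaround mentioned above might look roughly like this inside a job's build step (the compose file and service names are hypothetical):

    #!/bin/sh
    # Bring up the side containers (e.g. a Selenium Grid) next to the app,
    # run the e2e tests against them, then always tear everything down.
    docker-compose -f docker-compose.e2e.yml up -d
    trap 'docker-compose -f docker-compose.e2e.yml down -v' EXIT
    docker-compose -f docker-compose.e2e.yml run --rm app npm run e2e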
The new version of the Docker Slaves Plugin has a side container feature that solves this problem now!

Resources