The docker-compose utility is attached to the terminal by default, allowing you to see what's happening with all of your containers, which is very convenient for development. Does the docker stack deploy command support something similar, where the activity of the running containers is rendered in one terminal in real time?
According to the Docker website, the only log displayed is:
docker stack deploy --compose-file docker-compose.yml vossibility
Ignoring unsupported options: links
Creating network vossibility_vossibility
Creating network vossibility_default
Creating service vossibility_nsqd
Creating service vossibility_logstash
Creating service vossibility_elasticsearch
Creating service vossibility_kibana
Creating service vossibility_ghollector
Creating service vossibility_lookupd
However, there's a command which displays the logs:
docker service logs --follow <service-name>
Therefore, on a Linux system you could combine both commands to get the desired output.
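For example, a minimal sketch (the shell loop is my own addition, not a built-in feature, and reuses the vossibility stack from above):
docker stack deploy --compose-file docker-compose.yml vossibility
for service in $(docker stack services -q vossibility); do
  docker service logs --follow "$service" &   # tail each service's logs in the background
done
wait   # keep the terminal attached to all of the log streams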
What you're looking for is a merged output of the logs ("attached" for a stack deploy is a different thing with progress bars).
You can't get the logs for the full stack just yet (see issue #31458 to track the progress of this request), but you can get the logs for all of the containers in a service with docker service logs.
I am using docker-compose to deploy an application combining a number of different images.
Using Docker version 18.09.2, build 6247962
Docker-compose 1.117
Primarily, I have
ZooKeeper
Kafka
MYSQLDb
I noticed a strange problem where I could not start my application with docker-compose up because a port was already assigned. I then checked docker stats and saw that there were three containers named:
"test_ZooKeeper.1slehgaior"
"test_Kafka.kgjdorgsr"
"test_MYSQLDB.kgjdorgsr"
I have tried killing the containers, removing them, and pruning the system. Whenever I kill one of these containers, it instantly restarts, and I cannot for the life of me determine where they are being created from!
Please help :)
If you look into your docker-compose.yaml, I'm pretty sure you'll find a restart: always somewhere. If you want to correctly shut down a running Docker container managed by docker-compose, one way is to use docker-compose down from the directory where your yaml sits.
More information on the subject:
https://docs.docker.com/config/containers/start-containers-automatically/
Otherwise, you might try stopping a single running container instead of killing it, which, as far as I remember, tells Docker not to restart it again, while a killed container looks to the service like it has just crashed. Not too sure about the last part though.
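A minimal sketch of both approaches (my_app_container is a hypothetical container name):
docker-compose down            # stops and removes the containers, networks, etc. created by "up"
docker stop my_app_container   # gracefully stops a single container without it counting as a crash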
I have a swarm stack deployed, and I removed a couple of services from the stack and tried to deploy them again. These services now show with desired state "Remove" and current state "Preparing"; their names also changed from the custom service names to random Docker names. Swarm keeps trying to start these services, which are also stuck in "Preparing". I ran docker system prune on all nodes and then removed the stack. None of the services in the stack exist anymore except for the random ones. Now I can't delete them, and they are still in the "Preparing" state. The services are not running anywhere in the swarm, but I want to know if there is a way to remove them.
I had the same problem. Later I found that the current state 'Preparing' indicates that Docker is trying to pull images from Docker Hub, but there is no clear indicator of this in docker service logs <serviceName> (available with compose file versions above '3.1').
The pull can sometimes take a long time due to network bandwidth or other Docker-internal reasons.
Hope it helps! I will update the answer if I find more relevant information.
P.S. I verified that docker stack deploy -c <your-compose-file> <appGroupName> was not actually stuck by switching the command to docker-compose up. For me, it took 20+ minutes to download my image.
So this shows that there is no open issue with docker stack deploy.
Adding the reference from Christian below to round out and complete this answer.
Use docker-machine ssh to connect to a particular machine:
docker-machine ssh <nameOfNode/Machine>
Your prompt will change. You are now inside another machine. Inside this other machine do this:
tail -f /var/log/docker.log
You'll see the "daemon" log for that machine. There you'll see whether that particular daemon is doing the "pull", or what it is doing as part of the service preparation. In my case, I found something like this:
time="2016-09-05T19:04:07.881790998Z" level=debug msg="pull progress map[progress:[===========================================> ] 112.4 MB/130.2 MB status:Downloading
This made me realise that it was just downloading some images from my Docker account.
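As a small addition that is not part of Christian's steps: from a manager node, docker service ps also shows each task's current state and any error message without ssh-ing into every machine:
docker service ps --no-trunc <serviceName>   # look at the CURRENT STATE and ERROR columns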
I am trying to run a Docker image from Docker Hub in Azure Container Instances, but the deployment always fails for some reason. The repository on Docker Hub is public. The service says that the image has been pulled successfully, but then it pulls it again and again, and the state of the container is always "Waiting". The image itself cannot be broken, because I can create a container from it and use it locally without any problems.
What could be the reason?
Is the default command for your container a long-running process? Usually, this behavior indicates that the container is starting and immediately exiting, triggering the service to try and start it again, over and over.
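A quick local check, sketched with a hypothetical image name:
docker run --rm myaccount/myimage   # if this returns to the prompt immediately, there is no long-running process
docker ps                           # in a second terminal: the container should stay in the "Up" state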
I have a swarm cluster in which containers for different technologies are deployed (Zookeeper, Kafka, Elastic, Storm, and a custom web application).
The web application goes through tremendous changes, and I have to update the stack every time the web image changes. Once in a while there are also updates to the Elasticsearch image.
When I run docker stack deploy, it restarts all existing services, even those that have not changed at all. This disrupts the whole stack, and there is downtime for the whole application. docker stack does not have an update option.
Does someone have a solution for this?
docker service update --image does the trick.
Check the docker service update docs.
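For example (all names are placeholders; services created by a stack deploy are named <stackName>_<serviceName>):
docker service update --image myregistry/web:2.0 mystack_web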
Redeploying the stack with a changed configuration (docker-compose.yml file) solves the problem; see https://docs.docker.com/engine/reference/commandline/stack_deploy/#extended-description.
There it is stated: "Create and update a stack from a compose or a dab file on the swarm." Also, I don't see any command like docker stack update, so this should solve the problem.
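For example, after editing docker-compose.yml, run the same deploy command again; in my understanding, services whose definition is unchanged are left alone:
docker stack deploy -c docker-compose.yml <stack-name>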
If you have a docker stack created from compose.yml and you need to re-deploy only one service from the stack, just do this:
docker service rm <your-service>
and then:
docker stack deploy -c compose.yml <stack-name>
And this way you will just update your stack, not recreate all of the services.
I am currently trying to deploy a basic task queue and frontend using celery, rabbitmq and flower on Kubernetes (and minikube). I am following the example here:
https://github.com/kubernetes/kubernetes/tree/release-1.3/examples/celery-rabbitmq
I can get everything to work following the instructions; however, when I run docker build on the Dockerfile in ./celery-app-add, push the image to my own repository, and replace endocode/celery-app-add with <mine>/celery-app-add, I can't get the example to run anymore. I am assuming that the Dockerfile in source control is wrong, because if I pull the endocode/celery-app-add image and run bash in it, I am logged in as the root user (as opposed to a non-root user with the image built from the <mine>/celery-app-add Dockerfile).
After booting up all of the containers and services, I can see the following in the logs:
2016-08-18T21:05:44.846591547Z AttributeError: 'ChannelPromise' object has no attribute '__value__'
The celery logs show:
2016-08-19T01:38:49.933659218Z [2016-08-19 01:38:49,933: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@rabbit:5672//: [Errno -2] Name or service not known.
If I echo $RABBITMQ_SERVICE_SERVICE_HOST within the container, it shows the same host as the one listed for the rabbitmq-service after running kubectl get services.
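For reference, the check can be reproduced roughly like this (<celery-pod> is a placeholder for the actual pod name):
kubectl exec -it <celery-pod> -- sh -c 'echo $RABBITMQ_SERVICE_SERVICE_HOST'
kubectl get services   # compare the host above with the entry for rabbitmq-service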
I am not really sure where to go from here. Any suggestions are appreciated. Also, I added USER root (won't run this in production, don't worry) to my Dockerfile and still ran into the same issues above. docker history endocode/celery-app-add hasn't been too helpful either.
It turns out the problem is based around this Celery issue. Celery prefers to use CELERY_BROKER_URL over anything that can be set in the app configuration. To fix this, I unset CELERY_BROKER_URL in the Dockerfile, and it picked up my configuration correctly.
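A rough sketch of what the fix looks like when starting the worker (the tasks module name is hypothetical):
unset CELERY_BROKER_URL                    # make sure the environment variable does not override the app config
celery -A tasks worker --loglevel=info    # Celery now reads the broker URL from the app configuration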