I am using Docker for many different services and tools. I run docker stack deploy -c docker-compose.yml --with-registry-auth stack_name. On the swarm itself, only one or two of the nodes will have the images pulled, and not the others. I thought that the deploy causes all nodes to pull, so that the images exist everywhere. The error that then occurs is "no such image", because the image wasn't pulled on that particular node. I have been looking around for help, and I see many pages saying that it already does this by default. Am I missing something that is causing this? Any help is appreciated.
I finally figured out what the problem is. When deploying a job, the registry token it uses only stays active for as long as the job is running. In my script in my GitLab CI file, I always pulled the image on the first node explicitly, so it always worked there; that guaranteed at least one node had the image. To get it onto the other nodes, I had to add a sleep so that they had enough time to pull the image before the job ended. It was a race condition: the token became useless after the job ended, and the remaining nodes couldn't pull any images.
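For reference, a minimal sketch of the fix in the .gitlab-ci.yml deploy job (the job layout and the sleep duration here are specific to my setup; adjust to taste):

deploy:
  stage: deploy
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # --with-registry-auth forwards this job's registry token to the worker nodes
    - docker stack deploy -c docker-compose.yml --with-registry-auth stack_name
    # Keep the job (and therefore its registry token) alive long enough
    # for every node in the swarm to finish pulling the image
    - sleep 120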
Related
I want to make the deployment process faster than before; too much time is always spent in this step.
But I can't find any way to see detailed Docker logs, such as downloading, pulling images, starting containers, etc. I want to see them on the machine itself; I want to debug it. How can I check this?
These will be in various places.
docker events will show you each action the scheduler takes, and any actions on the node you've run that command on. You'll need to run this on all potential nodes while creating/updating a service to get a full accounting of manager and worker events.
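For example, you could run something like this on each node while the deploy is in progress (the filters are optional and just cut down the noise; the type=service filter needs a reasonably recent Docker with swarm event support):

$ docker events --filter 'type=service' --filter 'type=container' --filter 'type=image'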
On the node that's been assigned a task to create a container, the docker debug flag may give you more insight.
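One way to turn that on, assuming a systemd-based host and no existing /etc/docker/daemon.json (back yours up first if you have one):

$ echo '{ "debug": true }' | sudo tee /etc/docker/daemon.json
$ sudo kill -HUP $(pidof dockerd)   # the daemon reloads the debug setting on SIGHUP
$ journalctl -u docker.service -f   # follow the now-verbose daemon logs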
I have a problem with Docker that seems to happen when I change the machine type of a Google Compute Engine VM instance. Images that were fine fail to run, fail to delete, and fail to pull, all with various obscure messages about missing keys (this is on Linux), duplicate or missing layers, and others I don't recall.
The errors don't always happen. One that occurred just now, with an image that had run a couple of hundred times yesterday on the same setup (though before a restart), was:
$ docker run --rm -it mbloore/model:conda4.3.1-aq0.1.9
docker: Error response from daemon: layer does not exist.
$ docker pull mbloore/model:conda4.3.1-aq0.1.9
conda4.3.1-aq0.1.9: Pulling from mbloore/model
Digest: sha256:4d203b18fd57f9d867086cc0c97476750b42a86f32d8a9f55976afa59e699b28
Status: Image is up to date for mbloore/model:conda4.3.1-aq0.1.9
$ docker rmi mbloore/model:conda4.3.1-aq0.1.9
Error response from daemon: unrecognized image ID sha256:8315bb7add4fea22d760097bc377dbc6d9f5572bd71e98911e8080924724554e
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$
So it thinks it has no images, but the Docker folders are full of files, and it does know some hashes. It looks like some index has been damaged.
I restarted that instance, and then Docker seemed to be normal again without any special action on my part.
The only workarounds I have found so far are to restart and hope, or to delete several large Docker directories and recreate them empty. Then, after a restart, pull and run work again. But I'm now not sure that they always will.
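For the record, the destructive workaround is roughly the following (the paths are my assumption for a default overlay2 setup; this wipes all local images and containers):

$ sudo systemctl stop docker
$ sudo rm -rf /var/lib/docker/image /var/lib/docker/overlay2
$ sudo systemctl start docker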
I am running with Docker version 17.05.0-ce on Debian 9. My images were built with Docker version 17.03.2-ce on Amazon Linux, and are based on the official Ubuntu image.
Has anyone had this kind of problem, or does anyone know a way to reset the state of Docker without deleting almost everything?
Two points:
1) It seems that changing the VM had nothing to do with it. On some boots Docker worked, on others not, with no change in configuration or contents.
2) At Google's suggestion I installed Stackdriver monitoring and logging agents, and I haven't had a problem through seven restarts so far.
My first guess is that there is a race condition on startup, and adding those agents altered it in my favour. Of course, I'd like to have a real fix, but for now I don't have the time to pursue the problem.
I am new to Docker containers. I ran into a problem with getting changes to my code picked up. Some lines in my local style.css file were changed, and then I built the docker image again, but nothing actually changed when I browsed my app.
Here are some methods I found online and tried, but they didn't work:
remove image and build again
--no-cache=true
add a comment in Dockerfile to make it different
docker system prune
--pull
(I also used git pull to get the code onto my cloud instance; the files were verified to be the latest.)
I know little about Docker's mechanics; could anyone tell me what the problem is?
Extra info I found:
After stopping the container and removing the image, I restart my instance, then build the image and run the container again. Only this way can I pick up those changes. Does anyone know what the problem is?
Many thanks!
There appears to be some confusion about the difference between a container and an image. A container is the running instance of your application. It is based on an image that you have already built. That image has a tag for easier reference, but the real reference to an image is a sha256 hash, and building a new image will change the hash the tag points to without impacting any of your running containers.
Therefore, the workflow to update your running application in docker is to:
Build a new image
Stop the running container
Start a new container pointing to that image
Cleanup any old images or stopped containers
If you are using docker-compose, a single docker-compose up command automates the middle two steps, and it even deletes the old container. Most users keep a few copies of older images to allow easy rollback.
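A minimal sketch of that workflow with plain docker commands (image and container names are placeholders):

$ docker build -t myapp:latest .             # 1. build a new image
$ docker stop myapp && docker rm myapp       # 2. stop and remove the old container
$ docker run -d --name myapp myapp:latest    # 3. start a new container from the new image
$ docker image prune                         # 4. clean up the now-dangling old images

Or, with docker-compose, the middle steps collapse into:

$ docker-compose up -d --build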
I played around with Docker and followed this tutorial.
Using the Docker functionality in Plesk, I pulled the image I had created from Docker Hub and ran a container from it. When trying to remove it again, it threw an error message, which I didn't capture (I didn't expect anything strange at that point). Then, when going back to the container overview, it gave me the screen shown below.
Now, the container in the middle is mine (get-started), however, where the hell did angry_kare and peaceful_haibt come from?
Thank you guys for any answers or ideas! :)
(I'm currently not able to reproduce this :/)
Random containers spawned
When you run a container without a name, Docker creates it with a random name. The two extra containers you're seeing are the previously failed containers.
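You can confirm this and clean them up; a quick sketch (the image name is a placeholder, the container names are the ones from your screenshot):

$ docker ps -a                                          # shows exited/failed containers too
$ docker rm angry_kare peaceful_haibt                   # remove the leftover containers
$ docker run -d --name get-started myrepo/get-started   # an explicit name avoids random ones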
I've read a bit about docker and have read only about the following procedure:
Build an image using a Dockerfile
Push the image to a repo/server
Run a container from the image
For a new version, go to step 1
The claim is that this is quick, which it sometimes is. However, in these recipes I'll often see a 'git pull' or 'ADD' step followed by a bundle install or some other preparation step. If you always do this, you throw away a fair amount of progress and start the process as if you had never installed your app in the first place, though at least it won't have to reinstall any prerequisites. Not to mention, you have to upload a bunch of big images to your server, much of which is duplicate content anyway.
It occurs to me that a better procedure might be to treat your local Docker instance as a staging server: SSH in (to get the magical SSH agent forwarding to work more reliably), update the code, test, then commit the changes and push them up to whatever cloud service runs your Docker instances.
Am I missing something? Is this what everyone actually does, but doesn't really write about (because it's more complex)? Or have I just not stumbled on to the article that talks about this?
Docker containers generally follow the "golden image" rule - if you need to update it, you create a new image.
Because Docker caches the intermediate results of building Dockerfiles, subsequent builds tend to be fast. Also, the building of images tends to be automated (e.g. see automated builds on the Docker Hub), so as long as it doesn't take a really long time, it's not a problem.
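As an illustration of leaning on that cache, a hypothetical Dockerfile for a Ruby app (the base image and file names are assumptions): order the steps so the slow dependency install is reused whenever only the application code changes.

FROM ruby:2.5
WORKDIR /app
# Copy only the dependency manifests first: this layer, and the slow
# bundle install below it, stay cached as long as the Gemfile is unchanged
COPY Gemfile Gemfile.lock ./
RUN bundle install
# Application code changes only invalidate the layers from here down
COPY . .
CMD ["ruby", "app.rb"]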
You really don't want to be making changes to containers manually, as you will lose track of the changes and become unable to recreate them.