I reckon that there are lots of questions regarding this issue, but I've tried pretty much everything and it doesn't work for my case.
I'm a beginner with Docker, and I have cloned a Docker project from GitHub which deploys a webapp using nginx. The first build was successful; I could access the app at localhost:381. But when I changed some of the code and rebuilt it (after running docker-compose down I ran docker-compose up --build --no-cache), I still could not see the updates. I thought the image was not updated, so I deleted it, deleted the container and the volume, and I could still access the webapp from localhost:381. I rebuilt it again and I still can't see the updates.
Any help?
Run
docker-compose up --force-recreate
This recreates all current containers from their images, so they pick up the newly built code.
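Note that --no-cache is a flag of docker-compose build, not of docker-compose up, which may be why the rebuild in the question never took effect. A minimal sketch of a full clean rebuild cycle:
docker-compose down
# rebuild the images from scratch, ignoring any cached layers
docker-compose build --no-cache
# recreate the containers from the freshly built images
docker-compose up --force-recreate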
Related
When I deploy a stack (a docker-compose file from GitHub) on any environment, I get an "Error: No such image" during the deployment. This happens when Portainer needs to pull the image. If the image is already present, it does not throw the error.
I have confirmed these images do in fact exist on dockerhub.
I can successfully pull these images manually via the images tab on the respective environment.
I can run the docker-compose manually on my development machine and it works fine. If I manually delete the local images then it just re-downloads them, because they do exist.
This behavior started today. I installed Portainer a few days ago and it has been working fine. But, as of today, this error has started happening.
The environments are wildly different from each other (there are 3 of them), and this happens on all of them. If I ssh into the environments and manually run the docker-compose file from GitHub, it works just fine. This appears to be a problem with Portainer.
This was occurring because the compose file's pull policy was set to build:
pull_policy: build
Remove the pull policy from the docker-compose file altogether and you're good to go.
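For context, a minimal sketch of where that key lives in a compose file (the service and image names here are placeholders, not from the original setup):
services:
  web:
    image: example/webapp:latest    # placeholder image name
    pull_policy: build              # forces a local build; remove so Portainer pulls the image instead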
I'm new to docker and have been dabbling with it for the past few days. I've managed to successfully use docker-compose for a multi-container deployment involving an app server (flask + gunicorn) and web server (nginx).
Now, I'd like to recreate the deployment on an offline machine. After doing research, it seems most people mention using docker save and docker load to transfer over the base images. However, I'm wondering whether it's possible to recreate the deployment from the image created by docker-compose build? The reason is that I would like to skip the entire process of wheeling my Python package dependencies for offline use, which I would have to do if I started from the base images.
I've tried to save that particular image (the output of docker-compose build) and load it on the offline machine, then tried docker run and docker-compose up, but neither seems to work. I would like to check with the community whether this method is even possible, and if so, what's the right way to go about it?
Thanks!
To solve my issue, I ended up making an image of each individual container after pip install, then using docker-compose.yml simply to spin them up. As David mentioned, it doesn't seem possible to spin up the containers from the single image output by docker-compose build.
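For anyone attempting the same thing, a rough sketch of the transfer workflow under that approach (the image and file names are illustrative, not from the original post):
# on the online machine: build, then export each service's image
docker-compose build
docker save -o app.tar myproject_app:latest       # illustrative tag
docker save -o nginx.tar myproject_nginx:latest   # illustrative tag
# on the offline machine: import the images, then start the stack
docker load -i app.tar
docker load -i nginx.tar
docker-compose up -d
Note that for docker-compose up not to attempt a rebuild on the offline machine, the compose file there should reference the loaded tags via image: rather than a build: section.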
Can be closed, not sure how to do it.
To be quite frank, I am lost right now. The user who published his source on GitHub somehow failed to update the installation instructions when he released a new branch. Now, I am not dense, just uneducated when it comes to Docker. I would really appreciate a push in the right direction. If I am missing any information from this post, please allow me to provide it in the comments.
Current Setup
O/S - Debian 8 Minimal (Latest kernel)
Hardware - 1GB VPS (KVM)
Docker - Installed with Compose (# docker info)
I am attempting to set up this project (https://github.com/pboehm/ddns/tree/docker_and_rework). First, I should clone the repo to my working directory, let's say /home for example? I will run the following command:
git clone -b docker_and_rework https://github.com/pboehm/ddns.git
Which has successfully cloned the source files into /home/ddns/... (working dir)
Now I believe I am supposed to go ahead and build something, so I go into the following directory:
/home/ddns/docker
Inside is a docker-compose.yml file. I am not sure what it does, but by looking at it, it appears to contain a bunch of instructions which I can only presume have to do with actually building and deploying the whole container/image or magical thing, right? From here I go ahead and run the following:
docker-compose build
As far as I can tell, this builds the container or image or whatever it's called, you get my point. After a short while that completes, and docker images shows the results. Which looks correct, I see all of the dependencies in there, but things like:
go version
It does not show as a command, so I presume I need to run it inside the container, maybe? If so, I don't have a clue how. I need to run 'ddns.go', which is inside /home/ddns; the execution command is:
ddns --soa_fqdn=dns.stealthy.pro --domain=d.stealthy.pro backend
I am also curious why the front-end web page is not showing. There should be a page like this one:
http://ddns.pboehm.org/
But again, I believe there is more to do, I just do not know what.
docker-compose build will only build the images.
You also need to run the containers; the following will build the images and start them:
docker-compose up -d
The -d option runs the containers in the background.
To check what is running after docker-compose up:
docker-compose ps
It will show what is running and what ports are exposed from the container.
Usually you can then access the services from your localhost.
If you want to have a look inside a container:
docker-compose exec SERVICE /bin/bash
where SERVICE is the name of the service in docker-compose.yml.
The instructions it runs that you probably care about are in the Dockerfile, which for that repo is in the docker/ddns/ directory. What you're missing is that the Dockerfile creates an image, which is a template from which instances (containers) are created. Every time you docker run, you create a new instance from the image. docker run docker_ddns go version will create a new instance of the image, run go version, print the output, then die. A long-running process, like the one the docker_ddns-web image presumably starts, will keep running until something kills it. The reason you can't see the web page is probably that you haven't run docker-compose up yet, which will create linked instances of all the images specified in the docker-compose.yml file. Hope this helps.
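Putting that together, a rough sketch of the whole cycle (the image name docker_ddns is taken from this answer; depending on your project directory, compose may name it differently):
# build the images defined in docker-compose.yml
docker-compose build
# one-off command: creates a fresh container, runs it, and exits
docker run --rm docker_ddns go version
# start the long-running services (backend, web UI) in the background
docker-compose up -d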
I have the latest Docker for Mac installed, and I'm running into a problem where it appears that docker-compose up is stuck in a Downloading state for one of the containers:
± |master ✗| → docker-compose up --build
Pulling container (repo.io/company/container:prod)...
prod: Pulling from company/container
somehash: Already exists
somehash: Already exists
somehash: Already exists
somehash: Already exists
somehash: Pulling fs layer
somehash: Already exists
somehash: Already exists
somehash: Downloading [=================================================> ] 234.6 MB/239.3 MB
somehash: Download complete
somehash: Download complete
^^ this is literally what it looks like on my command line. Stopping and starting hasn't helped; it immediately produces this same output.
I've tried to docker rm the container, but I guess it doesn't exist yet; it returns No stopped containers. --force-recreate also gets stuck in the same place. And perhaps I'm not googling for the right terminology, but I haven't found anything useful to try. Any pointers?
I just needed to restart Docker.
Linux users can use sudo service docker restart.
Docker for Mac has a handy button for this in the Docker menu in the macOS menu bar.
If you happen to be using Docker Toolbox, try docker-machine restart.
I faced the same problem! Restarting the service didn't help, and downloading again didn't help. It would get stuck at a random point, leaving me with no option but to kill the pull.
One thing which worked for me was to download 1 file at a time. For Ubuntu users, you can use the following steps:
Stop the service:
sudo service docker stop
Start the Docker daemon manually with the maximum concurrent downloads set to 1:
sudo dockerd --max-concurrent-downloads 1
Download the required image:
sudo docker pull <image_name>
Once the images are downloaded, stop the foreground daemon (Ctrl+C in its terminal) and start the service again as before:
sudo service docker start
I had a similar situation this morning where my network suddenly went down and I was forced to power cycle the modem while docker-compose was still in the middle of downloading images from Docker Hub.
Yes, bouncing the docker daemon process seems to resolve this.
For Linux users, sudo service docker restart fixes it.
Go to the Docker Preferences from its menu bar icon. In there is a "bug" icon. Click on that and then "Clean / Purge data".
I'm running OSX and restarting Docker for Mac didn't help. Neither did a full restart or upgrading VirtualBox. What did work was turning my wifi interface on and off every time it got stuck. I had to do this repeatedly, but it eventually downloaded the entire image.
Directly download the necessary images using docker, e.g.
docker pull company/container
and then run
docker-compose up
again. Worked for me on MacOS.
I found a possible workaround.
I have my Docker engine installed as a snap on Ubuntu 18.04.
Searching some forums, I discovered that users relate this behavior to limitations in download bandwidth.
In my case, part of the download got stuck and never progressed, and I finally cancelled the process with Ctrl+C.
I added two flags to the configuration file that controls the Docker daemon's behavior: max-concurrent-downloads 1 and max-concurrent-uploads 1.
Remember that in my case I am working in a snap environment, where this file is located at /var/lib/docker/current/config/daemon.json.
Remember to stop all Docker processes before modifying the file, and create a backup of it first.
Add the two lines shown below; this limits the downloads to one at a time.
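Reconstructing from the flags named above, the relevant daemon.json content would be the following (any other keys already in your file should be kept):
{
  "max-concurrent-downloads": 1,
  "max-concurrent-uploads": 1
}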
This is the process that helped me resolve the problem. After restarting the daemon, the download completed successfully.
I had this issue in a VirtualBox VM when doing a docker pull: it got stuck at a specific position and never moved from there. The issue was due to the network adapter in my VM. I was using NAT by default. When I switched it to "Bridged Adapter", the issue went away.
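If you prefer the command line over the VirtualBox GUI, the same switch can be made with VBoxManage; a sketch, assuming the VM is named ubuntu-vm and the host interface is eth0 (both placeholders):
# switch the VM's first NIC from NAT to bridged mode (run with the VM powered off)
VBoxManage modifyvm "ubuntu-vm" --nic1 bridged --bridgeadapter1 eth0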
I had a similar problem on Docker for Windows for a couple of days, and when I tried to connect to the virtual machine (via Hyper-V Manager), the downloads started speeding along. I have no idea why, but it worked for me.
I tried to restart Docker and to update Docker, but it didn't help. What did:
Completely remove Docker.
Install Docker again.
It should work now.
Ideally everything will be sorted out with a Dockerfile and volumes, but sometimes that isn't practical or convenient.
For example, I found an image with Ghost already set up, and it seemed to work. So I added a few blog entries. Then I realized that I actually needed to modify config.js to set up the mail.
So I stopped the container, committed it, made some changes in bash, committed again, and then went to start the container again running Ghost. But I had trouble getting it to work, because the new image didn't have the configuration with the working directory and environment.
How can I copy the Docker container's configuration when I commit an image? Maybe I need to write a script that runs docker inspect on the container, pulls the config out, and then includes that in the docker commit command line?
This is a known issue: https://github.com/dotcloud/docker/issues/1141
The way you describe is still the best way to achieve that, I think, but I'd try using docker insert and see if that yields better results.
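For what it's worth, later Docker versions let you attach such configuration directly at commit time via --change, which accepts Dockerfile instructions. A sketch, with the container name, working directory, and start command made up for a typical Ghost setup:
# carry working directory, environment, and start command into the committed image
# (names and paths below are illustrative, not from the original image)
docker commit \
  --change 'WORKDIR /var/lib/ghost' \
  --change 'ENV NODE_ENV production' \
  --change 'CMD ["npm", "start"]' \
  my-ghost-container my-ghost-image:configured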