I have the latest Portainer running to manage my images and containers in Docker.
For cleanup, I want the command "docker image prune" to run daily.
For this I want to use Portainer host jobs, like it is shown here.
I did exactly the same setup, same commands, but in my Portainer there is just one "created" container of the ubuntu image and nothing happens.
Changing the Docker image from ubuntu:latest to ubuntu:18.04 fixed this issue for me.
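For reference, this is roughly the cleanup the daily job is meant to perform, expressed as a plain docker run rather than a Portainer host job. This is only a minimal sketch: it assumes the container can reach the host's Docker socket at /var/run/docker.sock and uses the official docker CLI image.
# Run the prune once against the host's Docker engine via the mounted socket
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker \
  docker image prune -f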
I have created a container using the following command: docker container run -i ubuntu. However, when I try to run a command within the container, such as cd, I get the following error: bash: line 1: cd: $'bin\r': No such file or directory. What could be the issue?
When you docker run an image, or use an image in a Dockerfile FROM line, or name an image: in a Docker Compose setup, Docker first checks to see if you have that image locally. If you do, Docker just uses it without checking Docker Hub or any other upstream registry.
Meanwhile, you can docker build or docker tag an image with any name you want...even a name that matches an official Docker Hub image.
You mention in a comment that you at some point did run docker build -t ubuntu .... That replaces the ubuntu image with what you built, so when you later docker run ubuntu, it's running your modified image and not the official Docker Hub Ubuntu image.
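If you want to verify that the local ubuntu tag really is your own build rather than the Docker Hub image, a quick check (just a sketch of what to look at) is its creation time and history:
# Show the image ID and creation time of the local ubuntu:latest tag
docker image inspect ubuntu:latest --format '{{.Id}} {{.Created}}'
# A custom build will list your own Dockerfile steps in its history
docker history ubuntu:latest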
This is straightforward to fix. If you
docker rmi ubuntu
it will delete your local (modified) copy, and the next time you use it, Docker will automatically pull it from Docker Hub. It should also work to
# Explicitly get the Docker Hub copy of the image
docker pull ubuntu
# Build a custom image, pulling whatever's in the FROM line
docker build --pull -t my/image .
(You can also hit this in a Docker Compose setup if you specify both image: and build:; that tells Compose which name to give the image it builds. You do not need to repeat the FROM line in image:, and it causes trouble if you do. The resolution is the same as described above. I might leave image: out entirely unless you're planning to push the image to a registry.)
I have Jenkins running in a Docker container and a local Docker registry. When I run
docker-compose up
it doesn't go outside the network; instead, it pulls the image from the local registry.
Is there a way I can update my local Docker registry with the latest Jenkins image,
so that when I run docker-compose up I get the latest Jenkins? Thank you!
By default, Docker always looks for the image on the host machine first, and if no specific tag is provided it uses the default tag, which is latest.
So in your case, since a jenkins image tagged latest is already available on the host, docker-compose will always use that image, treating it as the latest one. To use the latest image from the registry, you need to delete the jenkins image carrying the latest tag from your host.
Delete and use latest jenkins image:
docker rmi -f jenkins:latest
docker-compose stop
docker-compose rm -f
docker-compose pull
docker-compose up -d
This is your private registry, so you need to log in to it:
docker login my-server.test:5000
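If the goal is also to refresh the copy stored in that private registry, here is a rough sketch. The registry address my-server.test:5000 comes from the line above; note that the current official image on Docker Hub is published as jenkins/jenkins, so adjust the names to whatever your compose file actually references.
# Pull the newest image from Docker Hub, retag it for the local registry, and push it
docker pull jenkins/jenkins:lts
docker tag jenkins/jenkins:lts my-server.test:5000/jenkins:latest
docker push my-server.test:5000/jenkins:latest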
I have a docker compose file with a container that uses the latest tag:
code_site:
  image: code_site:latest
  deploy:
    restart_policy:
      condition: any
  volumes:
    - ../../data_to_backup/code_site/drupal_sites:/drupal_www/sites
    - drupal_core:/drupal_www/core
    - php_fpm_socket:/var/run/php-fpm7
  networks:
    - main_net
I have a process which will rebuild the container. I use it when I want to make changes to my site.
I am investigating a problem with the docker stack deploy command:
docker stack deploy --compose-file=docker-compose.yml code_site
(The stack name happens to match the image name but this is a coincidence.)
If I go through the following process:
Delete the code_site:latest image (docker rmi code_site:latest)
Rebuild a new code_site:latest image
Redeploy the stack
It will bring up the OLD version of the container. This is confusing, especially as I have deleted the old version.
I have gone further: I deleted the code_site image entirely and then ran the stack deploy command.
The stack deploys successfully still running the old version of the container.
I can use the docker images command and verify that there is no image named code_site:latest, so I have no idea how the stack could possibly deploy it.
Can anyone explain how the image is coming back from the dead, and what method I should use to get rid of it permanently and force docker stack to use the real image?
Thanks
Robert
Update 1
code_site is a locally built image
I am running on a swarm but there is only one node in the swarm
docker stack deploy will pull the latest image from your Docker registry, since --resolve-image always is set by default, therefore always resolving to the latest image in the registry. If you don't want this, run
docker stack deploy --resolve-image never [rest of deploy command]
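With the compose file and stack name used in this question, that would look something like:
docker stack deploy --resolve-image never --compose-file=docker-compose.yml code_site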
However, to make changes easier to maintain, I would suggest using version tags for your images in your registry, such as code_site:v1; when the code changes, push a new version tagged code_site:v2 and deploy the new image/version using the normal deploy command without --resolve-image never.
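As a sketch of that versioned workflow (the v2 tag is just an example):
# Build the new version of the image under a fresh tag
docker build -t code_site:v2 .
# Point the image: line in docker-compose.yml at code_site:v2, then redeploy
docker stack deploy --compose-file=docker-compose.yml code_site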
Also, if you plan to add nodes to your swarm, you will need to add --with-registry-auth to your docker stack deploy command to allow the other nodes to pull the image from your repo.
Update 1
If you are 100% sure you do not have an image named code_site:latest in your Docker registry, then this should work.
Run:
docker rm $(docker ps -aq)
docker volume rm $(docker volume ls -q)
To check for lingering containers/services:
List Existing Services
docker service ls
List Running Stacks
docker stack ls
List All Containers
docker ps -aq
Then redeploy with the deploy command.
Alternatively, to update your service without removing old containers/volumes/images, you can just rebuild your image and then update your service without removing anything.
This will update your service using the new image... no need to stop, remove, then update... just update.
Docker Service Update Command
Run after new image is built:
docker service update [SERVICE NAME] --image [IMAGE NAME] --force
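For the stack in this question, the service name is normally <stack>_<service>, so assuming the default naming the command would look something like:
docker service update code_site_code_site --image code_site:latest --force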
I have created a GitLab runner.
I have chosen the Docker executor and ubuntu as the default image.
I have put this at the top of my .gitlab-ci.yml file:
image: microsoft/dotnet:latest
I was thinking that GitLab CI would load the ubuntu image by default if there is no "image" directive in the .gitlab-ci.yml file.
But there is something strange: I am now wondering whether GitLab CI is creating an ubuntu container and then creating a dotnet container inside that ubuntu container.
Here is a very ugly test I did on the GitLab server: I removed the /usr/bin/docker file and replaced it with a script that logs its arguments.
This is very strange, because jobs are still working and I have nothing in my log file...
Thanks
The ubuntu image is indeed used if you don't specify an image, but you did, so your jobs should run in the dotnet container without ever spinning up ubuntu.
Your test behaves the way it does because docker is only the client, while dockerd is the daemon that the GitLab runner actually talks to.
If you want to check what's going on, you should call docker ps instead to get a list of running containers.
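For example, while a job is running on the runner host, something like this shows which image each container was started from (a generic check, not GitLab-specific):
# List running containers together with the image they were created from
docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'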
This is probably a duplicate, but all answers that I saw didn't work for me.
I'm using docker (17.06.2-ce), docker-compose (1.16.1).
I have an image of solr which I use for development and testing purposes (and on CI too).
When making changes to the image I need to rebuild the image and recreate containers, so that the containers use the latest possible image, which, in turn, takes the latest possible code from the local repo.
I've created my own image which is based on official solr-docker image. The repo is a folder with additional steps that I'm applying to the image, such as copying files and making changes to existing configs using sed.
I'm working in the repo and have the containers running in the background.
When I need to refresh the containers, I usually do these commands
sudo docker-compose stop
sudo docker rm $(sudo docker ps -a -q)
sudo docker rmi $(sudo docker images -q)
sudo docker-compose up
The above 4 commands are the only way it works for me. All other approaches I've tried didn't rebuild the images and didn't create the containers based on the new, rebuilt images. In other words, the code in the image would be stale.
Questions:
Is it possible to refresh the image + rebuild the container using fewer commands?
Every time I run the above 4 commands, Docker downloads ~500MB of dependencies. Is it possible not to download them and just rebuild the image using the updated local code and the existing cached dependencies?
I usually do docker-compose rm && docker-compose build && docker-compose up for recreating Docker containers; it won't re-download the 500MB.
You can use docker-compose down which does the following:
down Stop and remove containers, networks, images, and volumes
Therefore the command to use will be: docker-compose down --rmi local && docker-compose up
The --rmi local option will remove the locally built image, thus forcing a rebuild on up.
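As a side note, docker-compose up also accepts a --build flag, so (assuming the services define a build: section) a single command can rebuild the image from the local build cache, avoiding the large re-download, and recreate the containers:
# Rebuild using cached layers and recreate the containers in one step
sudo docker-compose up -d --build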