"docker stack deploy": where are docker images? - docker

I created a docker compose *.yml file where I have many services with specified image tags. Then I let Docker deploy a stack for me (on my local machine) with docker stack deploy -c .\my-compose-file.yml --with-registry-auth dev, and it is able to run all services. When I have docker events running at the same time, I can see image pull messages in the log, so Docker pulls the missing images. But when I run docker image ls -a, the pulled images are not displayed there.
So I am wondering what life cycle the downloaded images have: will they be removed from my drive when I do docker stack rm, and if not, how do I clean up such images?

I assume you have a multi-node swarm configured. In such cases, docker image ls runs on your local machine only, while the containers from the stack are distributed across nodes. The images are pulled on the nodes that will run the containers.
To get the list of images, you will need to go to each member of the swarm and issue the command there. An easy way to do it, assuming you have SSH access to the nodes with an identity key:
docker node ls --format '{{.Hostname}}' | xargs -I SERVER sh -c 'echo SERVER; ssh SERVER /usr/local/bin/docker image ls -a'
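As for the life-cycle part of the question: to my knowledge, docker stack rm removes the stack's services, networks, and containers, but it does not delete pulled images from a node's local cache; they stay on disk until you prune them. A minimal cleanup sketch, to be run on each node:
# remove all images not referenced by at least one container
docker image prune -a
# same, but without the interactive confirmation prompt
docker image prune -a -f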

Related

How to set up a pull-through local registry that persists images on the local machine? (for slow internet access)

How to create a local-registry container that mounts a volume from the host machine and persists locally all the images that get pulled?
I don't want to download images more than once, if not necessary, even after the registry (or the whole Docker VM) is thrown away and recreated.
This is useful when the connection is slow or there is no connectivity at all. It would also allow mounting a backup with pre-downloaded images as a docker volume, skipping the need for an internet connection altogether.
This latter is already possible, but it would be more convenient than having to manually docker push/docker pull onto the local registry, or to docker save/docker load each image that needs to be available there.
It's a rephrasing of this one, which wasn't reopened because of lack of feedback. The main purpose is to make the answer available for search, but feel free to propose better solutions.
Here are the step-by-step instructions. Hopefully they will save time and make life easier for somebody else travelling or living in disadvantaged areas of the world, where internet connections can't reach the Docker world because they are too limited or sometimes absent altogether!
Instructions are for macOS and Minikube, but they can be adapted for a VM running on Windows or via Docker Desktop.
(note: you will need to check if your virtualization technology provides automount of the system user directory)
Configuration
First define your environment variables with the desired values. See the env vars in the code below (PROXIED_REGISTRY, REGISTRY_USER, REGISTRY_PASSWORD, PATH_WHERE_TO_PERSIST_IMAGES, etc.)
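For illustration, a minimal set of exports might look like this (all values below are hypothetical placeholders; substitute your own):
export PROXIED_REGISTRY="my-registry.example.com"    # the upstream registry to proxy
export REGISTRY_USER="my-user"                       # credentials for the proxied registry
export REGISTRY_PASSWORD="my-secret"
export MACOS_USERNAME="jdoe"                         # your macOS account name
export PATH_WHERE_TO_PERSIST_IMAGES="docker-backup"  # relative to your home folder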
On the host machine
Minikube
If using Minikube, first bind your shell to the Docker daemon on its VM:
eval $(minikube docker-env)
or run the commands directly from inside the VM, via minikube ssh.
Create local registry
(note: some env vars might be unnecessary; check the Docker docs to see what you need)
The -v option mounts onto the local registry the path where you want to persist the registry data (repository folders and image layers).
When you use Minikube, it will automatically mount the home folder from the host (/Users/, on macOS) onto the virtual machine where Docker runs.
docker run -d -p 5000:5000 \
-e STANDALONE=false \
-e "REGISTRY_LOG_LEVEL=debug" \
-e "REGISTRY_REDIRECT_DISABLE=true" \
-e MIRROR_SOURCE="https://${PROXIED_REGISTRY}" \
-e REGISTRY_PROXY_REMOTEURL="https://${PROXIED_REGISTRY}" \
-e REGISTRY_PROXY_USERNAME="${REGISTRY_USER}" \
-e REGISTRY_PROXY_PASSWORD="${REGISTRY_PASSWORD}" \
-v /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry:/var/lib/registry \
--restart=always \
--name local-registry \
registry:2
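To sanity-check that the registry container is up before going further, you can query the catalog endpoint of the standard Docker Registry v2 HTTP API:
# should return a JSON list of repositories (empty at first): {"repositories":[]}
curl http://localhost:5000/v2/_catalog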
Login to your local registry
echo -n "${REGISTRY_PASSWORD}" | docker login -u "${REGISTRY_USER}" --password-stdin "localhost:5000"
(optional) Verify that the persistence directories are present
docker exec local-registry ls -la /var/lib/registry/docker/registry
ls -la /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry/docker/registry
Try to pull one image from your private registry
(to see it proxied through the local registry at localhost:5000)
docker pull localhost:5000/${REPOSITORY}/${IMAGE}:${IMAGE_TAG}
(optional) Verify the image data has been synced onto the local host, where desired
docker exec local-registry ls -la /var/lib/registry/docker/registry
ls -la /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry/docker/registry
If using Kubernetes
change the deployment spec container image to:
localhost:5000/${REPOSITORY}/${IMAGE}:${IMAGE_TAG}
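For example, with a hypothetical deployment named my-app containing a container named my-container, the change could be applied with kubectl (both names here are illustrative):
kubectl set image deployment/my-app \
  my-container=localhost:5000/${REPOSITORY}/${IMAGE}:${IMAGE_TAG}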
Et voila!
You can now keep the images downloaded from your repository stored on your host machine!
If internet is available, the local registry will ensure it has the most recent version of your pulled images by requesting it from the proxied registry (private, or Docker Hub).
And you will have a last-resort backup to run your containers even when your internet connection is too slow to re-download everything you need, or is unavailable altogether!
(really useful with Minikube, when you need to destroy your Docker virtual machine)
References:
https://docs.docker.com/registry/recipes/mirror/#run-a-registry-as-a-pull-through-cache
https://minikube.sigs.k8s.io/docs/handbook/mount/#driver-mounts

How to locally backup the images of a local Docker-registry?

How to create a local-registry container that mounts a volume from the host machine and persist locally all the images that get pulled?
Local Docker registry with persisted images
It should be possible to have an ephemeral registry container (and its docker volume), allowing to not download images more than once, even after the registry (or the whole Docker VM) is being throw away and recreated.
This would allow to pull just once the images, having them available when internet connectivity isn't good (or available at all); would allow also to mount a docker volume with pre-downloaded images.
It would be more convenient than having to manually docker push/docker pull onto the local registry, or to docker save/docker load each image that need to be available there.
Notes:
destination of the mount should probably be /var/lib/registry/docker/registry.
it is possible to configure a local Docker registry as a pull-through cache.
my specific setup runs docker via minikube, on macOS; but the answer doesn't have to be specific to it.
I managed it; the step-by-step instructions are identical to those in the answer above (from "Configuration" through the references). Hopefully they will make life easier for somebody else!

How can I see which user launched a Docker container?

I can view the list of running containers with docker ps or equivalently docker container ls (added in Docker 1.13). However, it doesn't display the user who launched each Docker container. How can I see which user launched a Docker container? Ideally I would prefer to have the list of running containers along with the user who launched each of them.
You can try this:
docker inspect $(docker ps -q) --format '{{.Config.User}} {{.Name}}'
Edit: Container name added to output
There's no built in way to do this.
You can check the user that the application inside the container is configured to run as by inspecting the container for the .Config.User field, and if it's blank the default is uid 0 (root). But this doesn't tell you who ran the docker command that started the container. User bob with access to docker can run a container as any uid (this is the docker run -u 1234 some-image option to run as uid 1234). Most images that haven't been hardened will default to running as root no matter the user that starts the container.
To understand why, realize that docker is a client/server app, and the server can receive connections in different ways. By default, this server is running as root, and users can submit requests with any configuration. These requests may be over a unix socket, you could sudo to root to connect to that socket, you could expose the API to the network (not recommended), or you may have another layer of tooling on top of docker (e.g. Kubernetes with the docker-shim). The big issue in that list is the difference between the network requests vs a unix socket, because network requests don't tell you who's running on the remote host, and if it did, you'd be trusting that remote client to provide accurate information. And since the API is documented, anyone with a curl command could submit a request claiming to be a different user.
In short, every user with access to the docker API is an anonymized root user on your host.
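To make that concrete, here is a quick demonstration (assuming a Linux host): any user who can talk to the daemon can bind-mount the host filesystem into a container and read root-only files:
# reads a root-only host file from inside a throwaway container
docker run --rm -v /:/host busybox cat /host/etc/shadow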
The closest you can get is to either place something in front of docker that authenticates users and populates something like a label, or to trust users to populate that label honestly (because there's nothing in docker validating these settings).
$ docker run -l "user=$(id -u)" -d --rm --name test-label busybox tail -f /dev/null
...
$ docker container inspect test-label --format '{{ .Config.Labels.user }}'
1000
Beyond that, if you have a deployed container, sometimes you can infer the user by looking through the configuration and finding volume mappings back to that user's home directory. That gives you a strong likelihood, but again, not a guarantee since any user can set any volume.
I found a solution. It is not perfect, but it works for me.
I start all my containers with an environment variable ($CONTAINER_OWNER in my case) which includes the user. Then, I can list the containers with the environment variable.
Start container with environment variable
docker run -e CONTAINER_OWNER=$(whoami) MY_IMAGE
Start docker compose with environment variable
echo "CONTAINER_OWNER=$(whoami)" > deployment.env # Create env file
docker-compose --env-file deployment.env up
List containers with the environment variable
for container_id in $(docker container ls -q); do
  echo $container_id $(docker exec $container_id sh -c 'echo "$CONTAINER_OWNER"')
done
As far as I know, docker inspect will show only the configuration that the container started with.
Because an entrypoint (or any init script) might change the user, those changes will not be reflected in the docker inspect output.
To work around this, you can overwrite the default entrypoint set by the image with --entrypoint="" and specify a command like whoami or id after it.
You asked specifically to see all the containers running and the launching user, so this solution is only partial and gives you the user in case it doesn't appear with the docker inspect command:
docker run --entrypoint "" <image-name> whoami
Maybe somebody will proceed from this point to a full solution (:
Read more about --entrypoint "" here.
If you are used to the ps command, you can run ps on the Docker host and grep for parts of the process running inside the container. For example, if you have a Tomcat container running, you may run the following command to get details on which user started the container:
ps aux | grep tomcat
This is possible because containers are nothing but processes managed by Docker. However, this will only work on a single host. Docker provides alternatives to get container details, as mentioned in the other answers.
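In the same spirit, a small sketch (Linux host assumed; "my-container" is a hypothetical name): map the container to its host-side PID and read the effective user straight from ps:
# get the host PID of the container's main process
pid=$(docker inspect --format '{{.State.Pid}}' my-container)
# print the user that process runs as
ps -o user= -p "$pid"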
This command will print the UID and GID the container is running as:
docker exec <CONTAINER_ID> id
ps aux | less
Find the process's name (the one running inside the container) in the list (last column) and you will see the user who ran it in the first column.

How to know which command or compose file has been used to start Docker containers?

Is there any way to find the source of a Docker container's startup script? I have a setup where I cannot find any docker-compose.yml file, nor the bash script etc. that would have run all the Docker containers currently running. I have a virtual machine that starts Docker containers on startup, but I have no idea which file is actually run.
I don't think there is an option to know which docker-compose file was used, but you can check each of your project folders manually.
docker-compose works by matching against the docker-compose.yml file, so if you run sudo docker-compose ps in each of your project folders, docker-compose will match the compose file used by the containers against the compose file in that project; if they are the same, the results will be displayed, and if not, nothing is shown.
If the containers are running automatically on reboot and you have no cron job, bash profile, rc.local, or any other startup script, that may mean they are containers with the --restart option set. You can change that by running the commands below:
docker ps -q | xargs docker update --restart no
docker ps -q | xargs docker stop
Then restart the machine. The containers should not start. If they do, then you have some script somewhere which is starting them.
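One more hedged pointer: containers created by docker-compose carry labels identifying the project, and newer Compose versions also record the working directory and config files, so inspecting those labels can reveal where a container came from:
# prints the compose project, working dir, and config files (blank if the labels aren't set)
docker inspect --format '{{index .Config.Labels "com.docker.compose.project"}} {{index .Config.Labels "com.docker.compose.project.working_dir"}} {{index .Config.Labels "com.docker.compose.project.config_files"}}' <container>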

How to list the published container images in the Google Container Registry using gcloud or another CLI

Is there a gcloud API or other command line interface (CLI) to access the list of published container images in the private Google Container Registry? (That is the container registry inside a Google Cloud Platform project)
gcloud container does not seem to help:
$ gcloud container
Usage: gcloud container [optional flags] <group | command>
group may be clusters | operations
command may be get-server-config
Deploy and manage clusters of machines for running containers.
flags:
--zone ZONE, -z ZONE The compute zone (e.g. us-central1-a) for the cluster
global flags:
Run `gcloud -h` for a description of flags available to all commands.
command groups:
clusters Deploy and teardown Google Container Engine clusters.
operations Get and list operations for Google Container Engine
clusters.
commands:
get-server-config Get Container Engine server config.
I also don't want to use gcloud docker to list images because this wants to connect to a particular docker daemon that I don't have. Unless there is a way to tell gcloud docker to connect to a remote public docker daemon that can read the private containers pushed to the registry through my project.
We just released a new command to list the images in your repository! You can try it out with:
gcloud alpha container images list --repository=gcr.io/$MYREPOSITORY
If you want to see the specific tags for an image you can use:
gcloud alpha container images list-tags gcr.io/$MYREPOSITORY/$MYIMAGE
The answer given by Robert Bailey is good for certain tasks, but might be missing what you specifically want to do. Nonetheless, your comments in reply to his answer are not so much faults of his answer as misunderstandings of what the commands which "fail" actually mean to do.
As far as your second comment,
Using docker I get the following error (for the reasons mentioned
above; I also edited the question): Cannot connect to the Docker daemon. Is the docker daemon running on this host?
This is a result of the docker daemon not running. Check if it's running via ps aux | grep docker. You can refer to the Docker documentation to determine how to properly install and run it.
As far as your first comment,
Using curl I get: {"errors":[{"code":"DENIED","message":"Failed to read tags for repository '<my_project>/<my_image>'"}]}. I have to
authenticate somehow to access the images in a private registry. I
don't want to use docker because that means I have to have a docker
daemon available. I only want to see if a container image with a
particular version is in the Container Registry. So what I need is an
API to the Container Registry in the Google Developer Console.
You wouldn't be able to curl the image unless it was public, as mentioned in Robert's latest comment, or unless you somehow provided some great oauth headers during the curl's invocation.
You should use gcloud docker to attempt to list the images in the registry, as you would for other docker registries. The gcloud container command group is the wrong one for your desired task. You can see below an output from gcloud version 96.0.0 (latest as of this comment) for the docker command group:
$ gcloud docker
Usage: docker [OPTIONS] COMMAND [arg...]
docker daemon [ --help | ... ]
docker [ --help | -v | --version ]
A self-sufficient runtime for containers.
Options:
--config=~/.docker Location of client config files
-D, --debug=false Enable debug mode
--disable-legacy-registry=false Do not contact legacy registries
-H, --host=[] Daemon socket(s) to connect to
-h, --help=false Print usage
-l, --log-level=info Set the logging level
--tls=false Use TLS; implied by --tlsverify
--tlscacert=~/.docker/ca.pem Trust certs signed only by this CA
--tlscert=~/.docker/cert.pem Path to TLS certificate file
--tlskey=~/.docker/key.pem Path to TLS key file
--tlsverify=false Use TLS and verify the remote
-v, --version=false Print version information and quit
Commands:
attach Attach to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes on a container's filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container's filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on a container or image
kill Kill a running container
load Load an image from a tar archive or STDIN
login Register or log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
network Manage Docker networks
pause Pause all processes within a container
port List port mappings or a specific mapping for the CONTAINER
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart a container
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save an image(s) to a tar archive
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop a running container
tag Tag an image into a repository
top Display the running processes of a container
unpause Unpause all processes within a container
version Show the Docker version information
volume Manage Docker volumes
wait Block until a container stops, then print its exit code
Run 'docker COMMAND --help' for more information on a command.
You should use gcloud docker search gcr.io/project-id to check which images are in the repository. gcloud has your credentials, so it can talk to the private registry as long as you're authenticated as an appropriate user on the project.
Finally, as an added resource: The Cloud Platform docs have a whole article about working with Google Container Registry.
If you know the project that is hosting the images (e.g. google-containers) you can list images with
gcloud docker search gcr.io/google_containers
For an individual image (e.g. the pause image in the google-containers project), you can check the versions with
curl https://gcr.io/v2/google-containers/pause/tags/list
I've just found a far simpler way to check for specific images. Once you have authenticated gcloud, use it to generate access tokens for reading from your private registry:
curl -u "oauth2accesstoken:$(gcloud auth print-access-token)" https://gcr.io/v2/<projectName>/<imageName>/tags/list
My best solution so far, without having a local docker daemon available and without being able to connect to a remote one (this would still require at least the local docker client, but not the local daemon running), is to SSH into a Container Cluster instance that runs Docker, run my search there, and get the result back in my original script:
gcloud compute ssh <container_cluster_instance> --command "sudo gcloud docker search ..."
Of course, to avoid all verbose output (like SSH/terminal welcome messages) I use some arguments to silence the execution a bit:
gcloud compute ssh --ssh-flag="-q" --ssh-flag="-o LogLevel=quiet" "$INSTANCE_NAME" --command "sudo gcloud docker search ..."
