How do I get the creation date of a Docker volume without using the Docker GUI for Windows? On Debian Linux there is no GUI for that.
In VS Code with the Docker extension there is also no way to see the creation date.
It is possible with inspect, but if I have many volumes with cryptic names it is hard to determine which one was created last.
Is there a convenient way to list them sorted by date from the Linux terminal?
I tried inspect: docker volume inspect
You could use the jq command to extract the information you want from docker volume inspect:
docker volume ls --format '{{ .Name }}' |
xargs -n1 docker volume inspect |
jq -r '.[0] | [.Name, .CreatedAt] | @tsv' |
sort -k2
Which on my system produces something like:
exvpn_ssh_data 2022-10-30T22:40:34-04:00
exvpn_ssh_hostkeys 2022-10-30T23:04:21-04:00
exvpn_vpn_status 2022-10-31T23:18:20-04:00
postfix_mailboxes 2022-12-18T11:02:04-05:00
postfix_postgres_data 2022-12-18T11:02:04-05:00
postfix_greylist_data 2022-12-18T11:02:05-05:00
postfix_postfix_spool 2022-12-18T11:02:05-05:00
postfix_postfix_data 2022-12-18T11:02:07-05:00
postfix_postfix_config 2022-12-18T11:02:07-05:00
postfix_sockets 2022-12-18T19:46:59-05:00
Note that we're sorting lexically, but because the timestamps are written in ISO 8601 format, that also ends up being a chronological sort.
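If jq isn't available, Docker's own Go templating can produce the same listing. A minimal sketch, assuming a Docker version where docker volume inspect supports --format (it has for a long time):
docker volume ls -q |
xargs docker volume inspect --format '{{ .Name }}{{ "\t" }}{{ .CreatedAt }}' |
sort -k2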
How do you get the digest of a container image running on a pod in Kubernetes?
Based on the screenshot below, I would like to be able to retrieve d976aea36eb5 from the pod (logs, YAML, etc., whatever way works).
What I can get from the YAML at //Deployment/spec/template/spec/containers/image is mysolution.host, which is the common name of the image.
If this isn't possible via the Kubernetes API, you can do it through the Docker registry API.
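For what it's worth, the Kubernetes API usually does expose this once the image has been pulled: the pod status carries the resolved digest in status.containerStatuses[].imageID. A quick check (the pod name here is a placeholder):
# Prints something like us.gcr.io/my-project-37111/mysolution.host@sha256:...
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].imageID}'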
What you're looking for is the image's digest, which is the sha256 hash of its manifest. The "Name" column in the screenshot of GCR's UI is the truncated digest of the image.
The string us.gcr.io/my-project-37111/mysolution.host represents a repository, which is just a collection of images. These images can be referenced by their digest or by a tag.
You can list all the tags in your repository using gcloud:
$ gcloud container images list-tags us.gcr.io/my-project-37111/mysolution.host
That will show you the truncated digest as well. For the full digest, you can use the --format=json flag:
$ gcloud container images list-tags --format=json us.gcr.io/my-project-37111/mysolution.host
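If you only want the digest column, gcloud's projection syntax can extract it directly; a sketch, assuming the field is named digest (worth verifying with --format=json first):
$ gcloud container images list-tags --format='get(digest)' us.gcr.io/my-project-37111/mysolution.host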
If you happen to know the tag (0.0.5-linux for the highlighted image), you can call the registry API directly:
$ curl -sI \
-H "Accept: */*" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://us.gcr.io/v2/my-project-37111/mysolution.host/manifests/0.0.5-linux |
grep -i "docker-content-digest"
The full digest comes back in the Docker-Content-Digest response header.
With the new version of Rancher, is it possible to tell Docker Swarm (1.12+) to redistribute containers when I add a new node to my infrastructure?
Suppose I have 4 nodes with 5 containers on each; if I add a 5th node, I'd like the containers redistributed so there are 4 on each node.
When a node crashes or shuts down (scaling my cluster down), rescheduling triggers just fine, but when I scale up by adding one or more nodes, nothing happens.
This is not currently possible. What you can do is update a service with docker service update (e.g. by adding an environment variable), which reschedules its tasks.
A new feature coming in Docker 1.13 is a forced update of services, which updates a service and redistributes its tasks across nodes, so something like docker service update --force $(docker service ls -q) might be possible (I haven't tried this yet, so I can't confirm).
You can find more info about this feature in this blog post.
I was having the exact same issue, though rather than adding a new node, I had a node failure on the underlying shared storage between nodes (I was using shared NFS storage for the mount points of read-only configs).
docker service update --force $(docker service ls -q) does not work as of Docker version 17.05.0-ce, build 89658be.
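One likely reason that one-liner fails: docker service update accepts a single service per invocation, so passing every ID at once is rejected. A loop form may fare better; a sketch, untested here:
#!/bin/bash
# Force-update each service individually so its tasks get rescheduled
for svc in $(docker service ls -q); do
    docker service update --force "$svc"
done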
scale-up.sh:
#!/bin/bash
echo "Enter the amount by which you want to scale your services (NUMBER), followed by ENTER: "
read -r SCALENUM
# Column 2 of `docker service ls` is the service name; sed drops the header row
for OUTPUT in $(docker service ls | awk '{print $2}' | sed -n '1!p')
do
    echo "Scaling up $OUTPUT to $SCALENUM"
    docker service scale "$OUTPUT=$SCALENUM"
done
scale-down.sh:
#!/bin/bash
# Scale every service down to zero replicas (this makes them unavailable)
for OUTPUT in $(docker service ls | awk '{print $2}' | sed -n '1!p')
do
    echo "Scaling down $OUTPUT to 0"
    docker service scale "$OUTPUT=0"
done
Note that the second script SCALES DOWN the services, making them unavailable. You can also use the following pipeline as a starting point for other scripting you may need, as it prints the service names independently of the other columns of a typical docker service ls:
docker service ls | awk '{print $2}' | sed -n '1!p'
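As an aside, recent Docker versions let docker service ls format its own output with a Go template, which avoids the awk/sed parsing entirely; a small sketch:
# Print only the service names, no header and no other columns
docker service ls --format '{{ .Name }}'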
I hope this helps!
I'm planning to use Docker in a restricted-internet-access environment controlled by a Squid proxy...
I can't find a way to retrieve the URLs Docker uses under the hood when pulling an image.
Could you please help me find these URLs so I can add rules for the Docker repositories?
I guess it's quite difficult to find out the exact URLs used when Docker performs an image pull, but there is at least a workaround that can give you the list of external servers Docker interacts with:
# Console #1: record the peers tcpdump sees talking HTTP(S)
sudo tcpdump | grep http | awk '{ gsub(":",""); print $3 "\n" $5 }' | grep -v "$YOUR_OWN_FQDN" > servers 2>&1
# Console #2: trigger the pull
docker pull debian
# Console #1: strip the .http/.https suffix and deduplicate
sed -e 's/\.http\(s\)\?//g' servers | sort -u
I've ended up with this list (unfortunately I'm not sure whether it's consistent or region-independent):
104.16.105.85
ec2-54-152-161-54.compute-1.amazonaws.com
ec2-54-208-130-47.compute-1.amazonaws.com
ec2-54-208-162-63.compute-1.amazonaws.com
server-205-251-219-168.arn1.r.cloudfront.net
server-205-251-219-226.arn1.r.cloudfront.net
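Since these addresses are CDN-backed and can change, allowing hostnames at the proxy is more robust than allowing IPs. Either way, the Docker daemon also has to be told to use the proxy; a minimal sketch of a systemd drop-in, with a placeholder proxy address:
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://squid.example.com:3128"
Environment="HTTPS_PROXY=http://squid.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
After adding it, run sudo systemctl daemon-reload && sudo systemctl restart docker.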
What CLI commands do I need to use in order to check if the image in my private docker registry is a newer version than the one currently running on my server?
E.g. I have a container that I ran using docker run -d my.domain.com:5000/project1
and I would like to know if it is out-of-date.
Brownie points to @mbarthelemy and @amuino, who put me on track. From that I was able to come up with the following bash script that others may find useful. It just checks whether the tag on the registry differs from the currently executing container's.
#!/bin/bash
# Re-exec under bash if we were started with some other shell
if [ -z "$BASH_VERSION" ]; then
    echo "this is not bash, calling self with bash...."
    SCRIPT=$(readlink -f "$0")
    /bin/bash "$SCRIPT"
    exit
fi

REGISTRY="my.registry.com:5000"
REPOSITORY="awesome-project-of-awesomeness"

# Image ID the (v1) registry reports for the repository's tag
LATEST=$(wget -qO- "http://$REGISTRY/v1/repositories/$REPOSITORY/tags")
LATEST=$(echo "$LATEST" | sed 's/[{}"]//g' | cut -d ' ' -f2)

# Image ID of the locally stored image
RUNNING=$(docker inspect "$REGISTRY/$REPOSITORY" | grep Id | sed 's/[",]//g' | tr -s ' ' | cut -d ' ' -f3)

if [ "$RUNNING" == "$LATEST" ]; then
    echo "same, do nothing"
else
    echo "update!"
    echo "$RUNNING != $LATEST"
fi
Even though there is no built-in command for it, you can use the registry API to check the tags on the registry and compare against what you are running.
$ curl --silent my.domain.com:5000/v1/repositories/project1/tags | grep latest
{"latest": "116f283e4f19716a07bbf48a562588d58ec107fe6e9af979a5b1ceac299c4370"}
$ docker images --no-trunc my.domain.com:5000/project1
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
my.domain.com:5000 latest 64d935ffade6ed1cca3de1b484549d4d278a5ca62b31165e36e72c3e6ab8a30f 4 days ago 583.2 MB
By comparing the ids, you can know that you are not running the latest version.
Not sure about the version, but if you mean the tag of the image, it can easily be checked through the registry v2 API. Note that in the context of Docker images, a tag has nothing to do with the version of the software.
Use a curl command in the CLI:
curl <docker_host_ip>:<docker_host_port>/v2/<repository_name>/tags/list
To get the list of repositories pushed to the private registry, use:
curl <docker_host_ip>:<docker_host_port>/v2/_catalog
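The v2 API can also return the content digest for a tag, which is a more reliable comparison than tag names alone. A sketch using the same placeholder host and repository values:
# The digest is returned in the Docker-Content-Digest response header
curl -sI \
-H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
<docker_host_ip>:<docker_host_port>/v2/<repository_name>/manifests/<tag> |
grep -i docker-content-digest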
AFAIK, this is not possible right now.
The only thing I see would be to pull from the registry to check whether there is a new version of your image (it would then have a different ID than your locally stored image):
docker pull your/image:tag
But yes, that would mean fetching the new image (if any).
If you have a look at the registry API documentation, you'll see that, if you don't mind scripting a bit, you could get this information without actually downloading the image: fetch the image tags and check whether the ID returned for the tag matches the ID of the local image with the same tag.
That being said, having something to "check for updates" integrated into the Docker CLI would be a nice addition.
I don't know if this works as advertised; it's just a quick hack I put together. But it will at least give you a little push toward how this might be done.
#!/bin/bash
container=$1
# Image the container was created from
imageid=$(docker inspect --format '{{.Config.Image}}' "${container}")
echo "Running version from: $(docker inspect --format '{{.Created}}' "${container}")"
echo "Image version from: $(docker inspect --format '{{.Created}}' "${imageid}")"
Example output:
[root@server ~]# sh version_check.sh 9e500019b9d4
Running version from: 2014-05-30T08:24:08.761178656Z
Image version from: 2014-05-01T16:48:24.163628504Z
You can use a bash script running as a cron-scheduled task:
#!/bin/bash
docker_instance='YOUR_RUNNING_INSTANCE'
instance_id=$(docker ps -qa --filter name="$docker_instance")
image_name_tag=$(docker inspect "$instance_id" | jq -r '.[0].Config.Image')
if [ -n "${image_name_tag}" ]; then
    status=$(docker pull "$image_name_tag" | grep "Downloaded newer image")
    if [ -n "${status}" ]; then
        echo ">>> There is one update for this image ... "
        # stop the docker instance
        docker stop "$docker_instance"
        # remove the docker instance
        docker rm "$docker_instance"
        # restart the docker instance using the last run command, picking up the new image from the remote repository
        run-my-docker-instance.sh
    fi
fi
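To schedule it, a crontab entry along these lines would do; the script path and timing are placeholders:
# Check for image updates every night at 04:00
0 4 * * * /usr/local/bin/check-image-update.sh >> /var/log/image-update.log 2>&1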
An older question, but this sounds like a problem that Watchtower can solve for you. It is another dockerized application that runs alongside your other containers and periodically checks whether their base images have been updated. When they have, it downloads the new image and restarts them.
If given the correct credentials, it can work with a local registry.
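For reference, the typical invocation from the Watchtower docs looks like this; it needs access to the Docker socket so it can manage your other containers:
docker run -d \
--name watchtower \
-v /var/run/docker.sock:/var/run/docker.sock \
containrrr/watchtower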
FWIW, I solved it with the bash script below for a while until I decided that Watchtower was the easier way to go (by the way: note the maintainer switched from v2tec to containrrr a while ago; the v2tec image isn't getting updates anymore). Watchtower gave me an easy way to schedule things without having to rely on cron (which gets blown away in a reinstall - granted, you could have something like Ansible recreate that for you, but this was easier for me). It also adds easy notifications (I like using Telegram) for updates, which I appreciate knowing about so that if something goes sideways, at least I know there was an update that could be to blame.
I'm not saying this will never cause issues, but I've been running Watchtower on various Docker hosts (3 of them, 2 in my homelab, one on Linode) for about a year now and I have yet to have an issue with it. I prefer this to having to manually update my containers on a regular basis. For me the risk of something getting screwed up is lower than the risk of running outdated containers, so this is what I chose for myself. YMMV.
I honestly don't get the apparent hate for automated update solutions like Watchtower - I see so many comments saying that you shouldn't use automated updates because they'll cause problems. I don't know what folks have been burned by, and I'd genuinely love to hear more about where this caused problems for you. I keep having some vague unease about doing automated updates, but given my experience so far I can honestly only recommend it. I used to use Diun for getting notified about updates and then would go and manually update my containers. That got real old after it became a daily chore! (With ~45 different containers running, you can pretty much guarantee that at least one of them will have an update every single day.)
If I really need to keep a container from updating, I can always stick a com.centurylinklabs.watchtower.enable=false label on the container. Or you can whitelist only the containers you want automatically updated, or... There are loads of possibilities with Watchtower.
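For example, opting a single service out in docker-compose.yml looks like this (service and image names are placeholders):
services:
  my-service:
    image: your-awesome-image
    labels:
      - "com.centurylinklabs.watchtower.enable=false"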
However, for reference if you still want to use it, see my script below. I used docker-compose pull to get the latest version - it does a check first to see if there is a new image, so doesn't waste a whole lot of bandwidth if there is nothing to update. It's effectively like doing the curl you guys used. Also I prefer the docker inspect -f commands to check the versions to the solutions that pipe through grep, sed, and co. since that is less likely to get broken by changes to docker inspect output format.
#!/usr/bin/env bash
cd /directory/with/docker-compose.yml/ || exit
image_name=your-awesome-image
container_name=your-container-name # the running container to compare against (adjust to your setup)
# Pull only downloads layers if the registry has a newer image for the tag
docker-compose pull
# Version label of the image the running container was created from
container_version=$(docker inspect -f '{{ index .Config.Labels "org.opencontainers.image.version" }}' "$container_name")
# Version label of the (possibly newer) image we just pulled
latest_image_version=$(docker inspect -f '{{ index .Config.Labels "org.opencontainers.image.version" }}' "$image_name")
if [[ "$container_version" != "$latest_image_version" ]]; then
    echo "Upgrading ${image_name} from ${container_version} to ${latest_image_version}"
    docker-compose down
    docker-compose up -d
fi