Use docker in a restricted internet environment

I'm planning to use docker in a restricted internet access environment controlled by a squid proxy, and I can't find a way to retrieve the URLs docker uses under the hood when pulling an image.
Could you please help me find these URLs so that I can add rules for the docker repositories?

I guess it's quite difficult to find out the exact URLs that are used when Docker performs an image pull, but there is at least a workaround that can give you the list of external servers Docker interacts with:
# Console #1
sudo tcpdump | grep http | awk '{ gsub(":",""); print $3 "\n" $5 }' | grep -v $YOUR_OWN_FQDN > servers 2>&1
# Console #2
docker pull debian
# Console #1
sed -e 's/\.http\(s\)\?//g' servers | sort -u
I ended up with this list (unfortunately I'm not sure whether it's consistent or region-independent):
104.16.105.85
ec2-54-152-161-54.compute-1.amazonaws.com
ec2-54-208-130-47.compute-1.amazonaws.com
ec2-54-208-162-63.compute-1.amazonaws.com
server-205-251-219-168.arn1.r.cloudfront.net
server-205-251-219-226.arn1.r.cloudfront.net
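If your proxy rules need hostnames rather than raw IPs, you can reverse-resolve the captured addresses. A minimal sketch, assuming reverse DNS is available for them (for Docker Hub the well-known endpoints include registry-1.docker.io and auth.docker.io, plus CDN hosts that can vary by region):
# keep only raw IPv4 addresses, then reverse-resolve them for hostname-based proxy rules
grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' servers | sort -u | while read -r ip; do
    dig +short -x "$ip"
done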

Related

Why is the Openshift imagestream hash different from the Redhat registry API hash?

I'm trying to develop an automation script to notify a certain team that some container images are outdated in a certain Openshift project inside an Openshift cluster.
To do that, one of the steps of the algorithm is comparing the current ImageStream hash:
oc get is -n default registry-console -o json | jq '.status.tags[0].items[0].image' | awk -F: '{ print $2 }'
Then I tried to list all the tags with 3.9 in the registry and grep for the output of that first command above:
curl -sL https://registry.access.redhat.com/v1/repositories/openshift3/registry-console/tags | jq 'to_entries | select(.[].key | contains("v3.9"))' | grep $hash
And I have no matches.
Then I tried creating a new ImageStream for testing, pointing to a specific Redhat registry tag, and even so the image hash generated by Openshift is different from the one in the official Redhat Openshift registry.
Why is this happening? Is it possible to compare container image hashes in Openshift to be sure they are the same?
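One detail that may explain the mismatch: the v1 tags endpoint returns legacy image IDs, whereas the hash recorded in an ImageStream is the v2 manifest digest. A hedged sketch of comparing like with like, assuming the registry serves v2 manifests for the tag the ImageStream tracks:
# fetch the manifest digest the v2 API advertises for a given tag (v3.9 here)
# and compare it with the digest stored in the ImageStream
hash=$(oc get is -n default registry-console -o json | jq -r '.status.tags[0].items[0].image')
curl -sI -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
    https://registry.access.redhat.com/v2/openshift3/registry-console/manifests/v3.9 \
    | grep -i docker-content-digest
# if the Docker-Content-Digest header equals $hash, the images are identical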

Kubernetes: is there an analogue to docker diff?

I'm trying to figure out where a program run in a container stores its logs, but I don't have SSH access to the machine with the deployed container, only kubectl. If I had SSH access, I'd do something like this:
ssh machine-running-docker 'docker diff \
  $(kubectl describe pod pod-name | \
  grep "Container ID" | sed -E "s#^[^/]+//(.+)#\1#")'
(The regex may be imprecise, just to give the idea).
Well, for starters, an app in a container should not store its logs in files inside the container. That said, it is sometimes hard to avoid when you work with 3rd-party apps that are not configured for logging to stdout or to some logging service.
Good old find to the rescue: just kubectl exec into the pod/container, and find / -mmin -1 will give you all files modified in the last minute. That should narrow the list enough for you (assuming the container has already lived for a few minutes), as sketched below.
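For instance, a minimal sketch (pod and container names are placeholders):
# list files modified in the last minute, discarding permission-denied noise
kubectl exec my-pod -c my-container -- find / -mmin -1 2>/dev/null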

Testing an application inside a docker container in VSTS

I'm trying to test a dockerized ASP.NET Core 2 application in VSTS. It is set up inside the docker container via docker-compose. The tests make requests to addresses stored in config (or taken from environment variables, if set).
Right now, the build is set up like this:
1. Run a compose command to restore and publish the app.
2. Run compose to create and run the docker containers.
3. Run a bash script (explained below).
4. Run the tests.
First of all, I found out that I can't use http://localhost:port inside VSTS. It works fine on my local machine, but it does not work on the server.
I've found this article that points out the need to use container's real IP to access it. I've tried 2 of the methods described in the referenced question, but none of them worked.
When using docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id, I get Template parsing error: template: :1:24: executing "" at <.NetworkSettings.Net...>: map has no entry for key "NetworkSettings" (the problem is with the command itself)
And when using docker inspect $(sudo docker ps | grep wiremocktest_microservice.gateway | head -c 12) | grep -e \"IPAddress\"\:[[:space:]]\"[0-2] | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}', I actually get the IP and can pass it to tests, but then something strange happens: they start to time out. I tried to replicate this locally, and it does the same: every request that I make to this IP times out (easily checked in a browser).
What address do I need to use to access the containers in VSTS, and why can't I use localhost?
I've run into a similar problem with an Azure Storage service running in a container for unit tests (a Gradle & Kotlin project). Locally everything works, and it's possible to connect to the container by using localhost:10000 (the port is published to the host machine in the run command). But this doesn't work on the VSTS build pipeline, and neither does trying to connect with the IP of the container.
I've found a solution that works at least in this case: I created a custom container network and connected my Azure Storage container and the VSTS agent container to that network. After that it's possible to connect to my custom container from the tests by using the container name and internal port number, e.g. my-storage-container:10000.
So I created a script that creates the container network, starts my container in that network, and then connects the VSTS agent as well by grepping the container ID from the process list. It's something like this:
# create a user-defined network for the containers to share
docker network create my-custom-network
# start the Azure Storage container (azurite) inside that network
docker run --net=my-custom-network -d --name azure-storage-container -t -p 10000:10000 -v ${SCRIPT_DIR}/azurite:/opt/azurite/folder arafato/azurite
# look up the VSTS agent container's ID and attach it to the same network
CONTAINER_ID=$(docker ps -a | awk '{ print $1,$2 }' | grep microsoft/vsts-agent | awk '{ print $1 }')
docker network connect my-custom-network ${CONTAINER_ID}
After that my tests can connect to the Azure storage container with http://azure-storage-container:10000 with no problems.

Swarm rescheduling after adding new node

With the new version of Rancher, is it possible to tell Docker Swarm (1.12+) to redistribute containers when I add a new node to my infrastructure?
Suppose I have 4 nodes with 5 containers on each; if I add a 5th node, I'd like to redistribute my containers to have 4 of them on each node.
When a node crashes or shuts down (scaling down my cluster), the rescheduling triggers fine, but when I scale up by adding 1 or more nodes, nothing happens.
It is not currently possible to do this. What you can do is update a service with docker service update (e.g. by adding an environment variable), which forces its tasks to be rescheduled; see the sketch below.
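A minimal sketch, assuming a service named my-service; bumping a throwaway environment variable changes the service definition, so the scheduler recreates the tasks and may place them on the new node:
# DEPLOY_BUMP is a dummy variable whose only job is to trigger the update
docker service update --env-add DEPLOY_BUMP=$(date +%s) my-service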
A new feature coming in Docker 1.13 is a forced update of services, which will update a service and force the redistribution of its tasks, so something like docker service update --force $(docker service ls -q) might be possible (I haven't tried this yet, so I can't confirm it).
You can find more info about this feature in this blog post.
I was having the exact same issue, though rather than being caused by adding a new node, mine came from a node failure on the underlying shared storage between nodes (I was using shared NFS storage for sharing mount points of read-only configs).
The docker service update --force $(docker service ls -q) approach does not work as of Docker version 17.05.0-ce, build 89658be.
scale-up.sh:
#!/bin/bash
echo "Enter the amount by which you want to scale your services (NUMBER), followed by ENTER: "
read SCALENUM
# service names are in the second column of `docker service ls`; sed drops the header row
for OUTPUT in $(docker service ls | awk '{print $2}' | sed -n '1!p')
do
    echo "Scaling up $OUTPUT to $SCALENUM"
    docker service scale $OUTPUT=$SCALENUM
done
scale-down.sh:
#!/bin/bash
# scale every service down to zero replicas (header row skipped, as above)
for OUTPUT in $(docker service ls | awk '{print $2}' | sed -n '1!p')
do
    echo "Scaling down $OUTPUT to 0"
    docker service scale $OUTPUT=0
done
Note that the second script SCALES DOWN the services, making them unavailable. You can also use the following command as a starting point for other scripting you may need, as it prints the service name independently of the other columns in a typical docker service ls command:
$(docker service ls | awk '{print $2}' | sed -n '1!p')
I hope this helps!

Docker - check private registry image version

What CLI commands do I need to use in order to check if the image in my private docker registry is a newer version than the one currently running on my server?
E.g. I have a container that I ran using docker run -d my.domain.com:5000/project1
and I would like to know if it is out-of-date.
Brownie points to @mbarthelemy and @amuino, who put me on track. From that I was able to come up with the following bash script that others may find useful. It just checks whether the tag on the registry is different from that of the currently executing container.
#!/bin/bash
# ensure we are running under bash
if [ -z "$BASH_VERSION" ]; then
    echo "this is not bash, calling self with bash...."
    SCRIPT=$(readlink -f "$0")
    /bin/bash "$SCRIPT"
    exit
fi

REGISTRY="my.registry.com:5000"
REPOSITORY="awesome-project-of-awesomeness"

# image ID that the registry's v1 tags endpoint reports for the repository
LATEST=$(wget -qO- http://$REGISTRY/v1/repositories/$REPOSITORY/tags)
LATEST=$(echo $LATEST | sed "s/{//g" | sed "s/}//g" | sed "s/\"//g" | cut -d ' ' -f2)

# image ID of the locally stored image
RUNNING=$(docker inspect "$REGISTRY/$REPOSITORY" | grep Id | sed "s/\"//g" | sed "s/,//g" | tr -s ' ' | cut -d ' ' -f3)

if [ "$RUNNING" == "$LATEST" ]; then
    echo "same, do nothing"
else
    echo "update!"
    echo "$RUNNING != $LATEST"
fi
Even though there is no built-in command, you can use the API to check the tags on the registry and compare them against what you are running.
$ curl --silent my.domain.com:5000/v1/repositories//project1/tags | grep latest
{"latest": "116f283e4f19716a07bbf48a562588d58ec107fe6e9af979a5b1ceac299c4370"}
$ docker images --no-trunc my.domain.com:5000/project1
REPOSITORY           TAG      IMAGE ID                                                           CREATED      VIRTUAL SIZE
my.domain.com:5000   latest   64d935ffade6ed1cca3de1b484549d4d278a5ca62b31165e36e72c3e6ab8a30f   4 days ago   583.2 MB
By comparing the IDs, you can tell that you are not running the latest version.
Not sure about the version, but if you mean the tag of the image, it can easily be checked through the registry v2 API. Note that in the context of docker images, a tag has nothing to do with the version of the software.
Use a curl command in the CLI:
curl <docker_host_ip>:<docker_host_port>/v2/<repository_name>/tags/list
To get a list of repositories pushed on the private registry, use
curl <docker_host_ip>:<docker_host_port>/v2/_catalog
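To go one step further and compare the registry's copy with what you have locally, a hedged sketch along these lines should work against any v2 registry (placeholders as above; it assumes the local image has a populated RepoDigests entry):
# manifest digest the registry advertises for the "latest" tag
remote_digest=$(curl -sI -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
    <docker_host_ip>:<docker_host_port>/v2/<repository_name>/manifests/latest \
    | awk -F': ' 'tolower($1) == "docker-content-digest" { print $2 }' | tr -d '\r')
# digest of the local copy of the same image
local_digest=$(docker inspect -f '{{ index .RepoDigests 0 }}' <repository_name>:latest | cut -d@ -f2)
if [ "$remote_digest" = "$local_digest" ]; then
    echo "up to date"
else
    echo "update available"
fi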
AFAIK, this is not possible right now.
The only thing I see would be to pull from the registry to check whether there is a new version of your image (it would then have a different ID than your locally stored image):
docker pull your/image:tag
But yes, that means fetching the new image (if there is one).
If you have a look at the registry API documentation, you'll see that if you don't mind scripting a bit, you could get this information without actually downloading the image, by fetching the image tags and checking whether the ID returned for a tag matches the ID of the local image with the same tag.
That being said, having something to "check for updates" integrated into the docker CLI would be a nice addition.
I don't know if this works as advertised; it's just a quick hack I put together. But it will at least give you a little push on how this might be done.
#!/bin/bash
container=$1
imageid=$(docker inspect --format '{{.Config.Image}}' ${container})
echo "Running version from: $(docker inspect --format '{{.Created}}' ${container})"
echo "Image version from: $(docker inspect --format '{{.Created}}' ${imageid})"
Example output:
[root@server ~]# sh version_check.sh 9e500019b9d4
Running version from: 2014-05-30T08:24:08.761178656Z
Image version from: 2014-05-01T16:48:24.163628504Z
You can use a bash script running in a cron scheduled task:
#!/bin/bash
docker_instance='YOUR_RUNNING_INSTANCE'
instance_id=$(docker ps -qa --filter name=$docker_instance)
image_name_tag=$(docker inspect $instance_id | jq -r '.[] | .Config.Image')
if [ "-${image_name_tag}-" != "--" ]; then
    status=$(docker pull $image_name_tag | grep "Downloaded newer image")
    if [ "-${status}-" != "--" ]; then
        echo ">>> There is an update for this image ... "
        # stop the docker instance
        docker stop $docker_instance
        # remove the docker instance
        docker rm $docker_instance
        # restart the docker instance, using the new image from the remote repository
        run-my-docker-instance.sh
    fi
fi
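To make this the cron-scheduled task mentioned above, a crontab entry along these lines would do (the script path is a placeholder):
# check for a newer image every hour
0 * * * * /path/to/check-image-update.sh >> /var/log/image-update.log 2>&1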
An older question, but this sounds like a problem that Watchtower can solve for you. It is another dockerized application that runs adjacent to your other containers and periodically checks to see whether their base images have been updated. When they have, it downloads the new image and restarts them.
If given the correct credentials, it can work with a local registry.
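A typical invocation is sketched below; it mounts the Docker socket so Watchtower can inspect and restart your containers (for a private registry you would additionally provide credentials, e.g. via a mounted Docker config):
# run Watchtower alongside the other containers on this host
docker run -d --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower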
FWIW I solved it with the bash script below for a while, until I decided that Watchtower was the easier way to go (by the way: note that the maintainer switched from v2tec to containrrr a while ago; the v2tec one isn't getting updates anymore). Watchtower gave me an easy way to schedule things without having to rely on cron (which gets blown away in a reinstall; granted, you could have something like Ansible recreate that for you, but this was easier for me). It also adds easy notifications (I like using Telegram) for updates, which I appreciate knowing about, so that if something goes sideways at least I know there was an update that could be to blame.
I'm not saying this will never cause issues, but I've been running Watchtower on various Docker hosts (3 of them, 2 in my homelab, one on Linode) for about a year now and I have yet to have an issue with it. I prefer this to having to manually update my containers on a regular basis. For me the risk of something getting screwed up is lower than the risk of running outdated containers, so this is what I chose for myself. YMMV.
I honestly don't get the apparent hate for automated update solutions like Watchtower. I see so many comments saying that you shouldn't use automated updates because they'll cause problems, but I don't know what folks have been burned by; I'd love to hear more about where this caused problems for you! I mean that: I genuinely don't understand and would love to learn more. I keep having some vague unease about doing automated updates, but given my experience so far I can honestly only recommend it. I used to use Diun for getting notified about updates and would then go and manually update my containers. That got real old once it became a daily chore! (With ~45 different containers running, you can pretty much guarantee that at least one of them will have an update every single day.)
If I really need to keep a container from updating, I can always stick a com.centurylinklabs.watchtower.enable=false label on the container. Or you can whitelist only the containers you want automatically updated, or... There are loads of possibilities with Watchtower.
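For instance, opting a single container out is just a label at creation time (the image name here is a placeholder):
# Watchtower skips containers that carry this label
docker run -d --label com.centurylinklabs.watchtower.enable=false my-image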
However, for reference, if you still want to go the script route, see mine below. I used docker-compose pull to get the latest version; it checks first whether there is a new image, so it doesn't waste a whole lot of bandwidth if there is nothing to update. It's effectively like doing the curl shown above. I also prefer the docker inspect -f commands for checking the versions over the solutions that pipe through grep, sed, and co., since they are less likely to be broken by changes to docker inspect's output format. (Note that the currently running version has to be read off the container, while the latest version is read off the freshly pulled image, or the two values will always match.)
#!/usr/bin/env bash
cd /directory/with/docker-compose.yml/
# placeholders: the image, and the name of the running container created from it
image_name=your-awesome-image
container_name=your-awesome-container
# pull only downloads anything if the registry actually has a newer image
docker-compose pull
# version label of the image the running container was started from
container_version=$(docker inspect -f '{{ index .Config.Labels "org.opencontainers.image.version" }}' "$container_name")
# version label of the freshly pulled image
latest_image_version=$(docker inspect -f '{{ index .Config.Labels "org.opencontainers.image.version" }}' "$image_name")
if [[ "$container_version" != "$latest_image_version" ]]; then
    echo "Upgrading ${image_name} from ${container_version} to ${latest_image_version}"
    docker-compose down
    docker-compose up -d
fi
