Docker Content Trust "offline" possible/reasonable?

I'm currently faced with the task of signing our internally created docker images stored in an Artifactory docker repository on premise.
We have a target environment which (currently) has no access to the internet nor to our internal docker registry.
I've learned so far that enabling Docker Content Trust with
export DOCKER_CONTENT_TRUST=1
on the machine building the images is mandatory. As far as I understand the documentation, the procedure is:
Enable Docker Content Trust on the build client
Use docker push, which will generate the root and targets keys
Store the key(s) in a safe location
Upload the image to Artifactory
Is it correct that with step 2 the official Notary server is/must be used to verify that the image is indeed signed by our company?
I'm just wondering if our current deployment scenario can use docker content trust:
Store image as myDockerImage.tar.gz (i.e. docker save <IMAGE_NAME>)
copy tar.gz file to target machine
use docker load -i <FILENAME>.tar.gz to import image to local registry on the target machine
docker run (this must fail if the image is not signed by our key)
As already stated, the target machine has access neither to our infrastructure nor to the internet. Is it advisable to use Docker Content Trust for this "offline" scenario? Is there a key file that can be put on the target machine instead of requiring a connection to the Notary server?

After digging for a while I came to the conclusion that using
sha256sum <FILE(S)> > deployment-files.sha256
in combination with an openssl signature over deployment-files.sha256 is my best option.
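A rough sketch of that approach (file and key names are placeholders, assuming an RSA key pair whose public half is shipped to the target machine out of band):
# on the build machine: hash the deployment artifacts and sign the hash file
sha256sum myDockerImage.tar.gz > deployment-files.sha256
openssl dgst -sha256 -sign private-key.pem -out deployment-files.sha256.sig deployment-files.sha256
# on the target machine: verify the signature, then verify the file hashes
openssl dgst -sha256 -verify public-key.pem -signature deployment-files.sha256.sig deployment-files.sha256
sha256sum -c deployment-files.sha256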

For a totally offline scenario you can't rely on Docker Content Trust.
You can associate the image repo digest (the sha256 you can use during docker pull) with the digest of a saved image file:
#!/bin/sh
set -e
IMAGE_TAG=alpine:3.14
IMAGE_HASH=sha256:a573d30bfc94d672abd141b3bf320b356e731e3b1a7d79a8ab46ba65c11d79e1
# pull the image pinned to a specific digest
docker pull ${IMAGE_TAG}@${IMAGE_HASH}
# save the image id so the recipient can find the image after docker load
IMAGE_ID=$(docker image inspect ${IMAGE_TAG}@${IMAGE_HASH} -f '{{.Id}}')
echo ${IMAGE_ID} > ${IMAGE_TAG}.id
# export the image and checksum both the archive and the id file
docker save -o ${IMAGE_TAG} ${IMAGE_TAG}@${IMAGE_HASH}
sha256sum ${IMAGE_TAG} ${IMAGE_TAG}.id > ${IMAGE_TAG}.sha256
You now have an image that can be used with docker load, and a digest for that image that you can verify at the receiver (assuming you trust how you convey the digest file to them - ie, you can sign it)
The recipient of the image files can then do something like:
sha256sum -c alpine:3.14.sha256
docker load -i alpine:3.14
docker inspect $(cat alpine:3.14.id)


Docker: get list of all the registries configured on a host

Can docker be connected to more than one registry at a time, and how do I figure out which registries it is currently connected to?
$ docker help | fgrep registr
login Log in to a Docker registry
logout Log out from a Docker registry
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
As you can see, there is no option to list the registries. I did find
a way by running:
$ docker system info | fgrep -i registr
Registry: https://index.docker.io/v1/
So... only one registry at a time? Is it not like apt, where one can point to more than one source? Can anybody point me to some good documentation about Docker and registries?
Oddly, I searched the web to no avail.
Aside from docker login, Docker isn't "connected to a registry" per se. Registry names are part of the image name, and Docker will connect to a registry server if it needs to pull an image.
As a specific example, the official Docker image for Elasticsearch is on a non-default registry run by Elastic. The example in that documentation is
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.17.0
# ^^^^^^^^^^^^^^^^^
# registry host name
You don't need to otherwise configure your system to connect to that registry, download an index, or anything else. In fact, you don't even need this docker pull command; if you directly docker run the image, Docker will download it if it doesn't have a copy locally.
The default registry is Docker Hub, docker.io, and this cannot be changed.
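For example, these two commands refer to exactly the same image; the registry (docker.io) and namespace (library) are just implied in the short form:
docker pull ubuntu:22.04
docker pull docker.io/library/ubuntu:22.04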
There are several alternate registries out there. The various public-cloud providers each have their own, and there are also several free-standing image registries. Each has its own instructions on how to set it up. You always need to include the registry name as part of the image name. Google Container Registry (GCR), for example, has a simple name syntax, so if you use GCR you can:
# build an image locally, labeled to be stored in GCR
# (this step does not contact or use GCR at all)
docker build -t gcr.io/my-name/my-image:tag .
# authenticate to the registry
# (normally GCR has a Google-specific login sequence)
docker login https://gcr.io
# push the image
docker push gcr.io/my-name/my-image:tag
# run the image, pulling it if not present
docker run ... gcr.io/my-name/my-image:tag

Import an image from an archive

I need to deploy selenium/standalone-chrome image to docker.
The problem is that I use a corporate OpenShift installation with a private registry. There is no possibility to upload the image to the registry or load it through Docker (the Docker service is not exposed).
I managed to export a tar file from my local machine using 'docker save -o'. I uploaded this image to Artifactory as an artifact and can now download it.
Question: how can I create or import image based on a binary archive with layers?
Thanks in advance.
Even though you're using OpenShift, you can still do a docker push, since the registry is exposed by default: you need your username (oc whoami) along with the token (oc whoami --show-token).
Before proceeding, make sure you have an ImageStream, since it's mandatory in order to push images.
Once you have obtained these, log in from your host:
docker login -u `oc whoami` -p `oc whoami --show-token` registry.your.openshift.fqdn.tld:443
Now, you just need to build your image
docker build . -t registry.your.openshift.fqdn.tld:443/your-image-stream/image-name:version
Finally, push it!
docker push registry.your.openshift.fqdn.tld:443/your-image-stream/image-name:version
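If you can run Docker locally, a rough sketch combining the archive from Artifactory with the push above might look like this (registry host, ImageStream, and tag names are placeholders):
# load the archive previously created with docker save -o
docker load -i standalone-chrome.tar
# retag the loaded image for the OpenShift registry and push it
docker tag selenium/standalone-chrome:latest registry.your.openshift.fqdn.tld:443/your-image-stream/standalone-chrome:latest
docker push registry.your.openshift.fqdn.tld:443/your-image-stream/standalone-chrome:latest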

What Docker command can I use after login to Docker registry?

I am new to Docker. I know the default registry is Docker Hub. And there are tutorials on navigating Docker Hub, e.g. searching for images. But those kinds of operations are performed in the Docker Hub web UI.
I was granted a private Docker registry. After I login using the command like docker login someremotehost:8080, I do not know what command to use to navigate around inside the registry. I do not know what images are available and what their tags are.
Could anyone share some info/link on what command to use to explore private remote registry after user login?
Also, to use images from the private registry, the name I need to use becomes something like 'my.registry.address:port/repositoryname'.
Is there a way to change the configuration of my docker application, so that it will make my.registry the default registry, and I can just use repositoryname, without specifying registry name in every docker command?
There are no standard CLI commands to interact with remote registries beyond docker pull and docker push. The registry itself might provide some sort of UI (for example, Amazon ECR can list images through the standard AWS console), or your local development team might have a wiki that lists out what's generally available.
You can't change the default Docker registry. You have a pretty strong expectation that e.g. ubuntu is really docker.io/library/ubuntu and not something else.
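That said, if your private registry implements the standard Docker Registry HTTP API v2, you can at least explore it with plain curl; a rough sketch (host, port, and credentials are placeholders, and the _catalog endpoint may be disabled or paginated on some registries):
# list the repositories the registry knows about
curl -u myuser:mypassword https://someremotehost:8080/v2/_catalog
# list the tags of one repository
curl -u myuser:mypassword https://someremotehost:8080/v2/repositoryname/tags/list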
For Docker there are only two commands for communicating with a registry:
docker pull and docker push
As for a private registry, there is no setting in Docker to pull only from a specific registry. The reason is the naming of Docker images. An official image has a plain name like centos, but a registry also contains images created by other organisations or people, and those always carry the user or organisation name, like pivotaldata/centos. This naming convention is how Docker locates an image, whether in a private registry (via login) or in the public registry.
If you want to interact more with a private repo, you can write your own batch or bash script. For example, I created a batch script which lists all the tags of the repo if the user gives a wrong tag:
@echo off
setlocal enabledelayedexpansion
docker login --username=xxxx --password=xxxx
docker pull %1:%2
IF NOT %ERRORLEVEL%==0 (
echo "Specified version was not found."
echo "Available versions for this image are:"
rem obtain a JWT from Docker Hub, then list the repository's tags
for /f %%i in ('curl -s -H "Content-Type:application/json" -X POST -d "{\"username\":\"user\",\"password\":\"password\"}" https://hub.docker.com/v2/users/login ^|jq -r .token') do set TOKEN=%%i
curl -sH "Authorization: JWT !TOKEN!" "https://hub.docker.com/v2/repositories/%1/tags/" | jq .results[].name
)

How to run container in a remote docker host with Jenkins

I have two servers:
Server A: Build server with Jenkins and Docker installed.
Server B: Production server with Docker installed.
I want to build a Docker image in Server A, and then run the corresponding container in Server B. The question is then:
What's the recommended way of running a container in Server B from Server A, once Jenkins is done with the docker build? Do I have to push the image to Docker hub to pull it in Server B, or can I somehow transfer the image directly?
I'm really not looking for specific Jenkins plugins or stuff, but rather, from a security and architecture standpoint, what's the best approach to accomplish this?
I've read a ton of posts and SO answers about this and have come to realize that there are plenty of ways to do it, but I'm still unsure what's the ultimate, most common way to do this. I've seen these alternatives:
Using docker-machine
Using Docker Restful Remote API
Using plain ssh root@server.b "docker run ..."
Using Docker Swarm (I'm super noob so I'm still unsure if this is even an option for my use case)
Edit:
I run Servers A and B in Digital Ocean.
A Docker image can be saved to a regular tar archive:
docker image save -o <FILE> <IMAGE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_save/
Then scp this tar archive to another host, and run docker load to load the image:
docker image load -i <FILE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_load/
This save-scp-load method is rarely used, though. The common approach is to set up a private Docker registry behind your firewall and push images to and pull them from that private registry. This doc describes how to deploy a container registry. Or you can choose a registry service provided by a third party, such as GitLab's container registry.
When using Docker repositories, you only push/pull the layers which have been changed.
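For reference, the official registry image keeps a minimal self-hosted setup fairly small; a rough sketch (image names are placeholders, and a real deployment needs TLS and authentication, plus the server's hostname instead of localhost):
# run a private registry on this host (insecure demo setup)
docker run -d -p 5000:5000 --name registry registry:2
# retag an image for that registry and push it
docker tag my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest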
You can use the Docker REST API; the Jenkins HTTP Request plugin can be used to make the HTTP requests. You can also run Docker commands directly against a remote Docker host by setting the DOCKER_HOST environment variable. To export the environment variable to the current shell:
export DOCKER_HOST="tcp://your-remote-server.org:2375"
Please be aware of the security concerns when exposing the Docker daemon over unencrypted TCP.
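For illustration, a rough sketch of a deploy step once DOCKER_HOST is set (image name and hostnames are placeholders); note that recent Docker versions also accept an ssh:// URL here, which avoids exposing the TCP port at all:
# with DOCKER_HOST pointing at Server B, ordinary docker commands run there
export DOCKER_HOST="tcp://your-remote-server.org:2375"
docker pull registry.example.com/my-app:latest
docker run -d --name my-app registry.example.com/my-app:latest
# alternatively, tunnel the connection over SSH instead of plain TCP
export DOCKER_HOST="ssh://deploy@your-remote-server.org"
docker ps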
Another method is to use SSH Agent Plugin in Jenkins.

How to share my Docker-Image without using the Docker-Hub?

I'm wondering where exactly Docker images are stored on my local host machine.
Can I share my Docker-Image without using the Docker-Hub or a Dockerfile but the 'real' Docker-Image? And what is exactly happening when I 'push' my Docker-Image to Docker-Hub?
Docker images are stored as filesystem layers. Every command in the Dockerfile creates a layer. You can also create layers by using docker commit from the command line after making some changes (via docker run probably).
These layers are stored by default under /var/lib/docker. While you could (theoretically) cherry-pick files from there and install them on a different Docker server, it is probably a bad idea to play with the internal representation used by Docker.
When you push your image, these layers are sent to the registry (the docker hub registry, by default… unless you tag your image with another registry prefix) and stored there. When pulling, the layer id is used to check if you already have the layer locally or it needs to be downloaded. You can use docker history to peek at which layers (other images) are used (and, to some extent, which command created the layer).
As for options to share an image without pushing to the docker hub registry, your best options are:
docker save an image or docker export a container. This will output a tar file to standard output, so you will likely want to do something like docker save 'dockerizeit/agent' > dk.agent.latest.tar. Then you can use docker load or docker import on a different host.
Host your own private registry (parts of this are now outdated). See the docker registry image. We have built an S3-backed registry which you can start and stop as needed (all state is kept in the S3 bucket of your choice) and which is trivial to set up. This is also an interesting way of watching what happens when pushing to a registry.
Use another registry like quay.io (I haven't personally tried it), although whatever concerns you have with the docker hub will probably apply here too.
Based on this blog, one could share a docker image without a docker registry by executing:
docker save --output latestversion-1.0.0.tar dockerregistry/latestversion:1.0.0
Once this command has been completed, one could copy the image to a server and import it as follows:
docker load --input latestversion-1.0.0.tar
Sending a docker image to a remote server can be done in 3 simple steps:
Locally, save docker image as a .tar:
docker save -o <path for created tar file> <image name>
Locally, use scp to transfer the .tar to the remote server (see the sketch after this list)
On remote server, load image into docker:
docker load -i <path to docker image tar file>
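Putting the three steps together, a rough sketch (hostnames, paths, and the image name are placeholders):
# 1. save the image to a tar archive locally
docker save -o my-image.tar my-image:1.0.0
# 2. copy the archive to the remote server
scp my-image.tar deploy@remote-server.example.com:/tmp/
# 3. load it into Docker on the remote server
ssh deploy@remote-server.example.com "docker load -i /tmp/my-image.tar"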
[Update]
More recently, there is Amazon AWS ECR (Elastic Container Registry), which provides a Docker image registry to which you can control access by means of the AWS IAM access management service. ECR can also run a CVE (vulnerabilities) check on your image when you push it.
Once you create your ECR, and obtain the "URL" you can push and pull as required, subject to the permissions you create: hence making it private or public as you wish.
Pricing is by amount of data stored, and data transfer costs.
https://aws.amazon.com/ecr/
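For illustration, a rough sketch of pushing to ECR with the AWS CLI v2 (the account id, region, and image name below are placeholders):
# authenticate Docker against your ECR registry
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
# tag and push an image to a repository you created in ECR
docker tag my-image:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-image:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-image:latest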
[Original answer]
If you do not want to use the Docker Hub itself, you can host your own Docker repository under Artifactory by JFrog:
https://www.jfrog.com/confluence/display/RTF/Docker+Repositories
which will then run on your own server(s).
Other hosting suppliers are available, e.g. CoreOS:
http://www.theregister.co.uk/2014/10/30/coreos_enterprise_registry/
which bought quay.io
