I have created my private Docker registry running at localhost:5000/v1, but it does not provide authentication. How can I set a username and password so that only authorized users can push an image to it?
I am also not able to list the images present in the private registry. All the documentation says that querying localhost:5000/v1/search will list them, but it returns a blank JSON response:
{
  "num_results": 0,
  "query": "",
  "results": []
}
How can this be resolved?
An answer to your first question: you need to put something like nginx in front of the registry to do the actual password authentication. There are example nginx configuration files for pre-1.3.9 and later nginx versions in the Docker Registry GitHub repo for wrapping the registry with nginx, and there is more information on authentication configuration on the nginx wiki.
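As a rough illustration (a minimal sketch, not the exact files from the repo; docker.example.com and the certificate paths are placeholders, and the htpasswd file is one you create with the htpasswd tool), the nginx side looks something like:
upstream docker-registry {
  server localhost:5000;
}

server {
  listen 443 ssl;
  server_name docker.example.com;

  ssl_certificate /etc/nginx/ssl/docker.example.com.crt;
  ssl_certificate_key /etc/nginx/ssl/docker.example.com.key;

  # image layers can be large, so disable the body size limit
  client_max_body_size 0;

  location / {
    auth_basic "Docker Registry";
    auth_basic_user_file /etc/nginx/htpasswd;
    proxy_pass http://docker-registry;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}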
You can use htpasswd to set up a login with Docker's registry image. However, I don't believe they have implemented a search function in this image yet. To create a user, I have the following script:
#!/bin/sh
usage() { echo "$0 user"; exit 1; }
if [ $# -ne 1 ]; then
  usage
fi
user=$1
cd `dirname $0`
if [ ! -d "auth" ]; then
  mkdir -p auth
fi
# make sure the file exists before the first run, otherwise both the
# chmod and htpasswd (without -c) would fail
touch auth/htpasswd
chmod 666 auth/htpasswd
docker run --rm -it \
  -v `pwd`/auth:/auth \
  --entrypoint htpasswd registry:2 -B /auth/htpasswd "$user"
chmod 444 auth/htpasswd
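For example, assuming the script is saved as create_user.sh (the file name is my assumption), adding a user looks like:
./create_user.sh alice   # htpasswd prompts for alice's password twice, then updates auth/htpasswd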
Then to run the registry, I use the following script (from the same folder):
#!/bin/sh
cd `dirname $0`
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs:ro \
-v `pwd`/auth/htpasswd:/auth/htpasswd:ro \
-v `pwd`/registry:/var/lib/registry \
-e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/host-cert.pem" \
-e "REGISTRY_HTTP_TLS_KEY=/certs/host-key.pem" \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
-e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
registry:2
Note that I'm also using TLS certificates in the above, under the certs directory. You can create these with openssl commands (the same ones used for securing the docker daemon socket).
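For reference, a self-signed pair matching the file names above can be generated roughly like this (a sketch; registry.example.com is a placeholder and must match the hostname your clients use, and -addext requires OpenSSL 1.1.1 or later):
mkdir -p certs
openssl req -newkey rsa:4096 -nodes -sha256 -x509 -days 365 \
  -subj "/CN=registry.example.com" \
  -addext "subjectAltName=DNS:registry.example.com" \
  -keyout certs/host-key.pem \
  -out certs/host-cert.pem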
Related
I am running docker "rootless" according to this guide: https://docs.docker.com/engine/security/rootless/
The user which actually runs docker is svc_test.
When I try to start a docker container that has directory mounts which don't exist, the docker daemon (i.e. the svc_test user) attempts to mkdir these directories, but fails with:
docker: Error response from daemon: error while creating mount source path '/dir_path/dir_name': mkdir /dir_path/dir_name: permission denied.
When I (as svc_test) then attempt to mkdir /dir_path/dir_name myself, it succeeds without any issues.
What is going on here, and why does this happen?
Clearly I am missing something, but I can't trace what it is exactly.
Update 1:
This is the specific docker cmd I use to run the container:
docker run -d --restart unless-stopped \
--name questdb \
-e QDB_METRICS_ENABLED=TRUE \
--network="host" \
-v /my_mounted_volume/questdb:/questdb \
-v /my_mounted_volume/questdb/public:/questdb/public \
-v /my_mounted_volume/questdb/conf:/questdb/conf \
-v /my_mounted_volume/questdb/db:/questdb/db \
-v /my_mounted_volume/questdb/log:/questdb/log \
questdb/questdb:6.5.2 /usr/bin/env QDB_PACKAGE=docker /app/bin/java \
-m io.questdb/io.questdb.ServerMain \
-d /questdb \
-f
For clarity: my final goal is to be able to run the docker container in question from the same user from which I run my docker daemon (the svc_test user), which is how I stumbled on this problem.
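Update 2:
One workaround I'm considering (my assumption being that in rootless mode the daemon's mapped root user lacks host-side permission to create directories under /dir_path) is to pre-create every bind-mount source as svc_test before docker run, so the daemon never has to mkdir them:
# pre-create all bind-mount sources so the rootless daemon
# does not need to create them inside its user namespace
mkdir -p /my_mounted_volume/questdb/{public,conf,db,log}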
I'm trying to create a docker container that will let me run firefox, so I can eventually use a jupyter notebook. Right now, although I have successfully installed firefox, I cannot get a window to open.
Following the instructions from running-gui-apps-within-docker, I created an image (named "sample") with Firefox and then tried to run it using
$ docker run -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --net=host sample
When I did so, I got the following error:
root@machine:~# firefox
No protocol specified
Unable to init server: Could not connect: Connection refused
Error: cannot open display: :1
Using man docker run to understand the flags, I was not able to find the --net flag, though I did see a --network flag. However, replacing --net with --network didn't change anything. How do I specify a protocol that will let me create an image from whose containers I will be able to run firefox?
PS - For what it's worth, when I check the value of DISPLAY, I get the predictable:
~# echo $DISPLAY
:1
I have been running firefox inside docker for quite some time, so this is possible. With regard to the security aspects, I think the following are the relevant parts:
Building
The build needs to match up uid/gid values with the user that is running the container. I do this with UID and GID build args:
Dockerfile
...
FROM fedora:35 as runtime
ENV DISPLAY=:0
# uid and gid in container needs to match host owner of
# /tmp/.docker.xauth, so they must be passed as build arguments.
ARG UID
ARG GID
RUN \
groupadd -g ${GID} firefox && \
useradd --create-home --uid ${UID} --gid ${GID} --comment="Firefox User" firefox && \
true
...
ENTRYPOINT [ "/entrypoint.sh" ]
Makefile
build:
	docker pull $$(awk '/^FROM/{print $$2}' Dockerfile | sort -u)
	docker build \
		-t $(USER)/firefox:latest \
		-t $(USER)/firefox:`date +%Y-%m-%d_%H-%M` \
		--build-arg UID=`id -u` \
		--build-arg GID=`id -g` \
		.
entrypoint.sh
#!/bin/sh
# Assumes you have run
# pactl load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1 auth-anonymous=1
# on the host system.
PULSE_SERVER=tcp:127.0.0.1:4713
export PULSE_SERVER
if [ "$1" = /bin/bash ]
then
exec "$#"
fi
exec /usr/local/bin/su-exec firefox:firefox \
/usr/bin/xterm \
-geometry 160x15 \
/usr/bin/firefox --no-remote "$#"
So I am running firefox as a dedicated non-root user, and I wrap it in xterm so that the container does not die if firefox accidentally exits or if you want to restart it. It is a bit annoying having all these extra xterm windows, but I have not found any other way to prevent accidental loss of the .mozilla directory content. Mapping it out to a volume would prevent running multiple independent docker instances, which I definitely want, and from a privacy point of view I also don't want to drag along a long history. Whenever I do want to save something, I make a copy of the .mozilla directory and save it on the host computer (and restore it later in a new container).
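To illustrate that save/restore step (a sketch using docker cp; firefox-httpsstackoverflow.com stands in for whatever name the run script below gives the container):
# save the profile from a running container to the host
docker cp firefox-httpsstackoverflow.com:/home/firefox/.mozilla ./mozilla-backup
# later, restore the contents into a fresh container (note the trailing /.)
docker cp ./mozilla-backup/. firefox-httpsstackoverflow.com:/home/firefox/.mozilla
# docker cp hands the files to root inside the container, so fix ownership
docker exec firefox-httpsstackoverflow.com chown -R firefox:firefox /home/firefox/.mozilla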
Running
run.sh
#!/bin/bash
export XSOCK=/tmp/.X11-unix
export XAUTH=/tmp/.docker.xauth
touch ${XAUTH}
xauth nlist ${DISPLAY} | sed -e 's/^..../ffff/' | uniq | xauth -f ${XAUTH} nmerge -
DISPLAY2=$(echo $DISPLAY | sed s/localhost//)
if [ "$DISPLAY2" != "$DISPLAY" ]
then
export DISPLAY=$DISPLAY2
xauth nlist ${DISPLAY} | sed -e 's/^..../ffff/' | uniq | xauth -f ${XAUTH} nmerge -
fi
ARGS=$(echo "$*" | sed 's/[^a-zA-Z0-9_.-]//g')
docker run -ti --rm \
--user root \
--name firefox-"$ARGS" \
--network=host \
--memory "16g" --shm-size "1g" \
--mount "type=bind,target=/home/firefox/Downloads,src=$HOME/firefox_downloads" \
-v ${XSOCK}:${XSOCK} \
-v ${XAUTH}:${XAUTH} \
-e XAUTHORITY=${XAUTH} \
-e DISPLAY=${DISPLAY} \
${USER}/firefox "$#"
With this you can, for instance, run ./run.sh https://stackoverflow.com/ and get a container named firefox-httpsstackoverflow.com. If you then want to log into your bank completely isolated from all other firefox instances (protected by operating system process boundaries, not just some internal browser separation), you run ./run.sh https://yourbank.example.com/.
Try running xhost + on your docker host to allow connections to the X server. (Note that xhost + disables X access control entirely; xhost +local: is a less permissive alternative.)
I was able to run a registry using docker container run -itd --publish 5000:5000 registry, but I am not asked for a username and password when I pull or push images to that registry.
How do I set a username and password for my own private docker registry, and how do I use them in a Dockerfile and in docker-compose when I want to use an image from that registry?
How to set a username and password for your own private docker registry?
There are a couple of ways to implement basic auth for a private registry. The simplest is to put the registry behind a web proxy and use the basic auth mechanism provided by the proxy.
To enable basic auth in the registry directly, do the following:
Create a password file containing the username and password: mkdir auth && docker run --entrypoint htpasswd registry:2 -Bbn your-username your-password > auth/htpasswd. (Note: recent registry:2 images no longer ship the htpasswd binary; if that fails, generate the file with the httpd:2 image instead: docker run --entrypoint htpasswd httpd:2 -Bbn your-username your-password > auth/htpasswd.)
Stop the registry: docker container stop registry.
Start the registry again with basic authentication enabled, using the command below.
Note: You must configure TLS first for authentication to work.
docker run -d \
-p 5000:5000 \
--restart=always \
--name registry \
-v `pwd`/auth:/auth \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
How to use them when you want to use an image from that registry?
Before pulling images, you first need to log in to the registry:
docker login your-domain.com:5000
and fill in the username and password from the first step.
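From there, reference images by prefixing the registry host to the image name (my-image is a placeholder); docker-compose reuses the credentials that docker login stored in ~/.docker/config.json:
Dockerfile
FROM your-domain.com:5000/my-image:latest

docker-compose.yml
services:
  app:
    image: your-domain.com:5000/my-image:latest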
I'm running a Container-Optimized OS VM on GCE (with Docker 17.03.2) and would like to use docker-compose to manage the containers. docker-compose isn't installed on COS, but it can be run from a container using the image docker/compose, as described in this tutorial:
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD:/rootfs/$PWD" \
-w="/rootfs/$PWD" \
docker/compose:1.14.0 up
The images I want to access are in a private Google Container Registry, which requires a docker login for pull access. How can I run the docker/compose image to access the private registry?
The COS VM is already authorized to access the registry, and I have a service account JSON file on the VM, but can that be passed to the compose image to login before running the up command?
You want to use the _json_key authentication method from GCR's advanced authentication docs. Does the following script work?
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD:/rootfs/$PWD" \
-w="/rootfs/$PWD" \
docker/compose:1.14.0 \
/bin/bash -c "docker login -u _json_key -p $(cat keyfile.json) https://gcr.io; docker-compose up"
The best solution I found was to authenticate on the Docker host and then mount the docker config into the docker-compose container:
docker login -u _json_key -p "$(cat keyfile.json)" https://gcr.io
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /root/.docker:/root/.docker \
-v "$PWD:$PWD" \
-w="$PWD" \
docker/compose:1.14.0 \
up
An alternative to using the service account JSON credentials directly, given that the COS VM is already authorized to access the registry (e.g. the attached service account has GCS view access to the project hosting the image), is to run the /usr/share/google/dockercfg_update.sh script shipped with COS:
#!/bin/sh
# Copyright 2015 The Chromium OS Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
set -eu
AUTH_DATA="$(curl -s -f -m 10 "http://metadata/computeMetadata/v1/instance/service-accounts/default/token" \
  -H "Metadata-Flavor: Google")"
R=$?
if [ ${R} -ne 0 ]; then
  echo "curl for auth token exited with status ${R}" >&2
  exit ${R}
fi
AUTH="$(echo "${AUTH_DATA}" \
  | tr -d '{}' \
  | sed 's/,/\n/g' \
  | awk -F ':' '/access_token/ { print "_token:" $2 }' \
  | tr -d '"\n' \
  | base64 -w 0)"
if [ -z "${AUTH}" ]; then
  echo "Auth token not found in AUTH_DATA ${AUTH_DATA}" >&2
  exit 1
fi
D="${HOME}/.docker"
mkdir -p "${D}"
cat > "${D}/config.json" <<EOF
{
  "auths":{
    "https://container.cloud.google.com":{"auth": "${AUTH}"},
    "https://gcr.io":{"auth": "${AUTH}"},
    "https://b.gcr.io":{"auth": "${AUTH}"},
    "https://us.gcr.io":{"auth": "${AUTH}"},
    "https://eu.gcr.io":{"auth": "${AUTH}"},
    "https://asia.gcr.io":{"auth": "${AUTH}"},
    "https://beta.gcr.io":{"auth": "${AUTH}"}
  }
}
EOF
This has the benefit of being maintained by Google, and it avoids having to manage service account credentials yourself.
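For example (a sketch; gcr.io/my-project/my-image is a placeholder, and both commands must run as the user whose ~/.docker the script writes, e.g. root on COS), you can refresh the token and pull directly, after which the docker/compose invocation above works unchanged because the compose container mounts /root/.docker:
/usr/share/google/dockercfg_update.sh
docker pull gcr.io/my-project/my-image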
I want to setup a private registry behind a nginx server. To do that I configured nginx with a basic auth and started a docker container like this:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/home/example/registry \
-p 5000:5000 \
registry
By doing that, I can log in to my registry and push/pull images... But if I stop the container and start it again, everything is lost. I would have expected my registry to be saved in /home/example/registry, but this is not the case. Can someone tell me what I missed?
I would have expected my registry to be saved in /home/example/registry but this is not the case
It is the case, except that the /home/example/registry directory is on the docker container's file system, not the docker host's file system.
If you run your container mounting one of your docker host directories onto a volume in the container, it will achieve what you want:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/registry \
-p 5000:5000 \
-v /home/example/registry:/registry \
registry
Just make sure that /home/example/registry exists on the docker host side.
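A quick way to verify the persistence after that change (a sketch; busybox is just a small test image and <registry-container> is your container's name or id):
docker pull busybox
docker tag busybox localhost:5000/busybox
docker push localhost:5000/busybox
ls /home/example/registry                  # layer data is now on the host
docker stop <registry-container> && docker start <registry-container>
docker pull localhost:5000/busybox         # the image survived the restart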