How to use docker-credential-pass to login to a private registry? - docker

Docker by default saves passwords unencrypted on disk, encoded in base64. I want to securely store a login password using the docker-credential-pass keystore plugin to log in to my private registry.
https://github.com/docker/docker-credential-helpers/
I am stuck on this issue: https://github.com/docker/docker-credential-helpers/issues/102
I've tried everything the users suggested in the comments, and I couldn't find any documentation for Docker and pass. I googled some tutorials as well, without success. I restarted Docker multiple times while trying, and it just doesn't work. I would appreciate some help if someone knows how to set it up.

Don't know if it's still relevant to you, but this worked for us (RHEL 7 system):
Generate a new gpg2 key with gpg2 --gen-key and select all the default answers (apart from name, email and passphrase). The output you get should contain a row that looks something like this:
pub 2048R/A154BD21 2019-09-12
Take the part after the / and init your pass with pass init <after-slash-part>, so in this example pass init A154BD21.
Add the line "credsStore": "pass" to your ~/.docker/config.json so that it looks something like:
{
  "credsStore": "pass"
}
Make sure that the location of your docker-credential-pass file is in your $PATH environment variable.
Now try logging in. If it's not working, please describe a bit more in detail what you do and if you get any error messages, etc.
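For reference, the whole sequence condenses to roughly this (a sketch; A154BD21 is the example key ID from above and registry.example.com is a placeholder):
# 1. Generate a key and note the ID after the slash in the "pub" line
gpg2 --gen-key
# 2. Initialize pass with that key ID
pass init A154BD21
# 3. Point Docker at the pass helper (this overwrites an existing config.json)
cat > ~/.docker/config.json <<'EOF'
{
  "credsStore": "pass"
}
EOF
# 4. Make sure the helper binary is on $PATH, then log in
command -v docker-credential-pass
docker login registry.example.com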

I went with a bash script like this that automates much of the process.
#!/bin/sh
# Sets up a docker credential helper so docker login credentials are not stored encoded in base64 plain text.
# Uses the pass secret service as the credentials store.
# If previously logged in w/o cred helper, docker logout <registry> under each user or remove ~/.docker/config.json.
# Tested on Ubuntu 18.04.5 LTS.
if ! [ "$(id -u)" = 0 ]; then
  echo "This script must be run as root"
  exit 1
fi
echo "Installing dependencies"
apt update && apt-get -y install gnupg2 pass rng-tools jq
# Check for later releases at https://github.com/docker/docker-credential-helpers/releases
version="v0.6.3"
archive="docker-credential-pass-$version-amd64.tar.gz"
url="https://github.com/docker/docker-credential-helpers/releases/download/$version/$archive"
# Download cred helper, unpack, make executable, and move it where Docker will find it.
wget $url \
&& tar -xf $archive \
&& chmod +x docker-credential-pass \
&& mv -f docker-credential-pass /usr/local/bin/
# Done with the archive
rm -f $archive
config_path=~/.docker
config_filename=$config_path/config.json
# Could assume config.json isn't there or overwrite regardless and not use jq (or sed etc.)
# echo '{ "credsStore": "pass" }' > $config_filename
if [ ! -f "$config_filename" ]; then
  mkdir -p "$config_path"
  # Create default docker config file if it doesn't exist (never logged in etc.). Empty is fine currently.
  cat > "$config_filename" <<EOL
{
}
EOL
  echo "$config_filename created with defaults"
else
  echo "$config_filename already exists"
fi
# Whether config is new or existing, read into variable for easier file redirection (cat > truncate timing)
config_json=$(cat "$config_filename")
if [ -z "$config_json" ]; then
  # Empty file will prevent jq from working
  config_json="{}"
fi
# Update Docker config to set the credential store. Used sed before but messy / edge cases.
echo "$config_json" | jq --arg credsStore pass '. + {credsStore: $credsStore}' > $config_filename
# Output / verify contents
echo "$config_filename:"
cat $config_filename | jq
# Help with entropy to prevent gpg2 full key generation hang
# Feeds data from a random number generator to the kernel's random number entropy pool
rngd -r /dev/urandom
# To cleanup extras from multiple runs: gpg --delete-secret-key <key-id>; gpg --delete-key <key-id>
echo "Generating GPG key, accept defaults but consider key size to 2048, supply user info"
gpg2 --full-generate-key
echo "Adjusting permissions"
sudo chown -R $USER:$USER ~/.gnupg
sudo find ~/.gnupg -type d -exec chmod 700 {} \;
sudo find ~/.gnupg -type f -exec chmod 600 {} \;
# List keys
gpg2 -k
key=$(gpg2 --list-secret-keys | grep uid -B 1 | head -n 1 | sed 's/^ *//g')
echo "Initializing pass with key $key"
pass init $key
echo "Enter a password to add to the secure store"
pass insert docker-credential-helpers/docker-pass-initialized-check
# Just a verification. Don't need to show actual password, mask it.
echo "Password verification:"
pass show docker-credential-helpers/docker-pass-initialized-check | sed -e 's/\(.\)/\*/g'
echo "Docker credential password list (empty initially):"
docker-credential-pass list
echo "Done. Ready to test. Run: sudo docker login <registry>"
echo "Afterwards run: sudo docker-credential-pass list; sudo cat ~/.docker/config.json"

The docker-credential-pass helper doesn't set up a pass-based password store - it expects an already functional password store, so I would advise you to first set that up before incorporating the credentials helper.
Pass is a password manager that is essentially a bash script that automates encrypting/decrypting secrets using GnuPG. That means a working pass setup requires both of those tools to function: pass and gpg2. Optionally the password store can be a git repository, in which case you'll need git installed as well.
After downloading pass and setting up GnuPG, initialize the password store with your gpg id. From the docs (assuming "ZX2C4 Password Storage Key" is your gpg id):
$ pass init "ZX2C4 Password Storage Key"
mkdir: created directory ‘/home/zx2c4/.password-store’
Password store initialized for ZX2C4 Password Storage Key.
You should then be able to add passwords using the pass command, and if that works, enable the docker-credential-pass helper as you've already done.
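A quick way to confirm the store is functional before involving Docker (test/example is just an arbitrary entry name):
pass insert test/example
pass show test/example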
https://www.passwordstore.org/

Related

Export docker registry credentials from Windows/WSL

I'm doing a couple of experiments for a Kubernetes-based local dev environment and for that I'm exporting my local Docker registry credentials like this:
$ kubectl create secret generic -n default regcred \
--from-file=.dockerconfigjson=/home/user/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
This works fine for me (Linux without a desktop environment), but fails for my colleagues using any sort of credentials store, in particular those on Windows/WSL2. Their .docker/config.json files do not contain credentials, but instead a credsStore reference to desktop.exe, which I can only assume to be "Docker Desktop".
Is there a way I could extract credentials from the Windows credential store (mostly) automatically? It's of course OK to make the person executing the script confirm credential store access, but the remainder of the process should ideally be automated.
In the end, it hasn't been all that difficult. I've written a Bash script (also requires jq) which extracts credentials from the credentials store and combines them with the original .docker/config.json.
#!/usr/bin/env bash
set -ue

# Read the configured credential helper name from the Docker config
CREDHELPER=$(jq -r .credsStore < "${HOME}/.docker/config.json")
STR=.
if [[ -n "$CREDHELPER" && "$CREDHELPER" != "null" ]]; then
  CRED_BINARY=docker-credential-$CREDHELPER
  # Ask the helper for all registries it has credentials for
  REGS=$($CRED_BINARY list | jq -r 'keys | join(" ")')
  for REG in $REGS; do
    # Fetch each credential and base64-encode "username:secret" the way Docker stores it
    CRED=$(echo $REG | $CRED_BINARY get | jq -rj '"\(.Username):\(.Secret)"' | base64 -w 0)
    STR="$STR * { \"auths\": { \"$REG\": { \"auth\": \"$CRED\" }}}"
  done
fi
# Merge the extracted credentials into the original config and print the result
jq -r "$STR" "${HOME}/.docker/config.json"

SSH keys for Docker executor

I have created an image where I run some tasks.
I want to be able to push some files to a remote server that runs Windows Server 2022.
The gitlab-runner runs on an Ubuntu machine.
I managed to do that using shell executors. But now I want to do the same inside a docker container.
Using the following guide
https://docs.gitlab.com/ee/ci/ssh_keys/#ssh-keys-when-using-the-docker-executor
I don't understand which user I should create the keys under.
In the shell executor case I used the gitlab-runner user, under which I created a pair of keys. I added the public key to the server that I want to push files to, and it worked.
I then added the same private key into the GitLab CI/CD variable, as the guide suggests.
Then inside the job I added the following:
before_script:
  - 'command -v ssh-agent >/dev/null || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
script:
  - scp -P <port> myfile.txt username@ip:remote_path
But the job fails with errors
Host key verification failed.
lost connection
Should I use the same private key from the gitlab-runner user?
PS: The echo "$SSH_PRIVATE_KEY" works. I can see the key I added in the GitLab CI/CD variable.
I used something similar in my CI process and it works like a charm. I recall I used a base64-encoded runner key due to some formatting errors:
- mkdir -p ~/.ssh
- echo $GITLAB_RUNNER_SSH_KEY | base64 -d > $HOME/.ssh/runner_key
- chmod 700 ~/.ssh
- chmod 600 $HOME/.ssh/runner_key
- eval $(ssh-agent -s)
- ssh-add $HOME/.ssh/runner_key
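As for the Host key verification failed message itself: that usually means the remote host is not in the container's known_hosts yet. A common fix (a sketch; <port> and <ip> are placeholders as in the question) is to add it in before_script:
- mkdir -p ~/.ssh
- ssh-keyscan -p <port> <ip> >> ~/.ssh/known_hosts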

How to use chown in Nix derivations?

I'm trying to package a script to perform deployments with a git push in a Nix derivation.
The goal is to have a git repo that runs some actions on post-receive.
I want to package it so that I can keep it alongside my configuration and ship it easily, minimising the amount of manual tasks to do.
I've already set up a git user:
users.users.git = {
  isNormalUser = true;
  shell = "/run/current-system/sw/bin/git-shell";
  home = "/home/git";
  openssh.authorizedKeys.keys = [
    ...
  ];
};
My derivation looks like this:
with import <nixpkgs> {};
let
  setupGitRepo = name: (
    stdenv.mkDerivation {
      name = "setup-git-repo";
      dontUnpack = true;
      buildInputs = [
        git
      ];
      buildPhase = ''
        git init --bare ${name}.git
        mkdir -p ${name}.git/hooks
        touch ${name}.git/hooks/post-receive
        tee ${name}.git/hooks/post-receive <<EOF
        GIT="/home/git/${name}.git"
        WWW="/var/www/${name}"
        TMP="/tmp/${name}"
        ENV="/home/git/${name}.env"
        rm -rf \$TMP
        mkdir -p \$TMP
        git --work-tree=\$TMP --git-dir=\$GIT checkout -f
        cp -a \$ENV/.* \$TMP
        cd \$TMP
        # install deps, etc
        rm -rf \$WWW/*
        mv \$TMP/* \$WWW/
        # restart services here
        rm -rf \$TMP
        EOF
      '';
      installPhase = ''
        mkdir -p $out/var/www/${name}
        mkdir -p $out/home/git
        mkdir -p $out/home/git/${name}.env
        chown -R git:users ${name}.git # doesn't work
        chown -R git:users $out/var/www/${name} # doesn't work
        cp -R ${name}.git $out/home/git
      '';
    }
  );
in setupGitRepo "test"
My problem is that I can't use chown git:users to set ownership during the build or install phase, I assume because of isolation in the build process.
Is there a way to overcome this?
I wonder if the issue I'm getting is a signal I'm missing something obvious or misusing tools.
Writing to /home from a package could be another code smell: I'm doing it to have a nicer URL for the git remote (git remote add server git@mydomain:test.git).
Thanks
EDIT: I'll upload my nixos configuration here with the suggestions from David: https://github.com/framp/nixos-configs/blob/master/painpoint
In general, as far as I know, Linux has no way to give ownership of a file to another user, unless you are root.
Secondly, I question why you would even want files in the output of your derivation to be owned by a different user. As described in the Nix manual, after building a derivation, Nix sets the modes of all files to 0444 or 0555, meaning they will be readable by all users on the system, and some will also be executable by all users on the system. There should be no additional permissions that you need for your users.
Remember that the output of a Nix derivation is supposed to be an immutable thing that never changes.
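You can see this normalization on any build output (a sketch; setup-git-repo.nix is a hypothetical file holding the derivation above):
$ nix-build setup-git-repo.nix
$ ls -lR result/
Every file comes out as mode 0444 or 0555, matching the manual's description, and the ownership of store paths is not something you can adjust afterwards.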

Retain the data inside the database using Docker

I have created a database.sql file and added it to a folder inside the container so that it can be used by the application. I want my data to remain persistent even when the container is removed. I tried using a volume, but how do I add the .sql file and keep its data persistent?
sudo docker run -v /datadir sqldb
Here, sqldb is the database image name and /datadir is the mount folder.
Dockerfile of sqldb:
FROM ubuntu:latest
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get -y install mysql-client mysql-server curl
ADD ./my.cnf /etc/mysql/my.cnf
RUN sed -i -e"s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
ADD database.sql /var/db/database.sql
ENV user root
ENV password password
ENV url file:/var/db/database.sql
ENV right WRITE
ADD ./start-database.sh /usr/local/bin/start-database.sh
RUN chmod +x /usr/local/bin/start-database.sh
EXPOSE 3306
CMD ["/usr/local/bin/start-database.sh"]
start-database.sh file
#!/bin/bash
# This script starts the database server.
echo "Creating user $user for databases loaded from $url"
# Import database if provided via 'docker run --env url="http://ex.org/db.sql"'
echo "Adding data into MySQL"
/usr/sbin/mysqld &
sleep 5
curl $url -o import.sql
# Fixing some phpmysqladmin export problems
sed -ri.bak 's/-- Database: (.*?)/CREATE DATABASE \1;\nUSE \1;/g' import.sql
# Fixing some mysqldump export problems (when run without --databases switch)
# This is not tested so far
# if grep -q "CREATE DATABASE" import.sql; then :; else sed -ri.bak 's/-- MySQL dump/CREATE DATABASE `database_1`;\nUSE `database_1`;\n-- MySQL dump/g' import.sql; fi
mysql --default-character-set=utf8 < import.sql
rm import.sql
mysqladmin shutdown
echo "finished"
# Now the provided user credentials are added
/usr/sbin/mysqld &
sleep 5
echo "Creating user"
echo "CREATE USER '$user' IDENTIFIED BY '$password'" | mysql --default-character-set=utf8
echo "REVOKE ALL PRIVILEGES ON *.* FROM '$user'#'%'; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
echo "GRANT SELECT ON *.* TO '$user'#'%'; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
echo "finished"
if [ "$right" = "WRITE" ]; then
echo "adding write access"
echo "GRANT ALL PRIVILEGES ON *.* TO '$user'#'%' WITH GRANT OPTION; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
fi
# And we restart the server to go operational
mysqladmin shutdown
cp /var/db/database.sql /var/lib/docker/volumes/mysqlvol/database.sql
echo "Starting MySQL Server"
/usr/sbin/mysqld
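For reference, the environment variables defined in the Dockerfile (user, password, url, right) can be overridden at run time, e.g. (a sketch; the URL is the placeholder from the script's comment):
docker run -d -p 3306:3306 -e url="http://ex.org/db.sql" -e user=myuser -e password=secret -e right=WRITE sqldb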
To keep data persistent even when the container is removed, use volumes.
For more info look at:
SO: How to deal with persistent storage (e.g. databases) in docker
Docker Webinar Q&A: Persistent Storage & Docker
Manage data in containers
Compose and volumes
Volume plugins
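As a concrete sketch for the image above (mysqlvol matches the volume name referenced in start-database.sh, and /var/lib/mysql is MySQL's default data directory):
docker volume create mysqlvol
docker run -d -v mysqlvol:/var/lib/mysql -p 3306:3306 sqldb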
I hope that helps. If not please ask.

Permissions issue with Docker volumes

I want to start using Docker for my Rails development, so I'm trying to put together a skeleton I can use for all my apps.
However, I've run into an issue with Docker volumes and permissions.
I want to bind-mount the app's directory into the container, so that any changes get propagated to the container without the need to re-build it.
But if I define it as a volume in my docker-compose.yml, I can't chown the directory anymore. I need the directory and all its contents to be owned by the app user in order for Passenger to work correctly.
I read that it's not possible to chown volumes.
Do you know of any workarounds?
You could try running the chown in CMD instead. Like:
CMD chown -R app:app /home/app/webapp && /sbin/my_init
RUN statements are only executed during build time of your image, when mounted volumes are not yet available.
CMD, on the other hand, is executed during the runtime of the container, when the volumes are already mounted. So that has the effect that you want.
I use a hacky solution to manage this problem for my development environments. Use it on development environments only!
The images I use for development environments contain a script that looks like this:
#!/bin/sh
# In /usr/local/bin/change-dev-id
# Change the "dev" user's UID and GID

# Retrieve the new ids to apply
NEWUID=$1
NEWGID=$1
if [ $# -eq 2 ]; then
  NEWGID=$2
elif [ $# -ne 1 ]; then
  echo "Usage: change-dev-id NEWUID [NEWGID]"
  echo "If NEWGID is not provided, its value will be the same as NEWUID"
  exit 1
fi

# Retrieve the old ids
OLDUID=$(id -u dev)
OLDGID=$(id -g dev)

# Change the user ids
usermod -u ${NEWUID} dev
groupmod -g ${NEWGID} dev

# Change the files ownership
find / -not \( -path /proc -prune \) -user ${OLDUID} -exec chown -h ${NEWUID} {} \;
find / -not \( -path /proc -prune \) -group ${OLDGID} -exec chgrp -h ${NEWGID} {} \;

echo "UID and GID changed from ${OLDUID}:${OLDGID} to ${NEWUID}:${NEWGID} for \"dev\""
exit 0
In the Dockerfile of my base image, I add it and make it executable:
# Add a script to modify the dev user UID / GID
COPY change-dev-id /usr/local/bin/change-dev-id
RUN chmod +x /usr/local/bin/change-dev-id
Then, instead of changing the owner of the mounted folder, I change the ID of the container's user to match the ID of my user on the host machine:
# In the Dockerfile of the project's development environment, change the ID of
# the user that must own the files in the volume so that it matches the ID of
# the user on the host
RUN change-dev-id 1234
This is very hacky but it can be very convenient. I can own the files of the project on my machine while the user in the container has the correct permissions too.
You can update the script to use the username you want (mine is always "dev") or modify it to take the username as an argument.
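Usage is then just a matter of looking up your IDs on the host and baking them into the image (a sketch):
# On the host machine, find the UID/GID the container user should match
id -u   # e.g. 1234
id -g   # e.g. 1234
Then put RUN change-dev-id 1234 1234 in the project's Dockerfile, as shown above.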
