I'm trying to package a script to perform deployments with a git push in a Nix derivation.
The goal is to have a git repo that runs some actions on post-receive.
I want to package it so that I can keep it alongside my configuration and ship it easily, minimising the number of manual tasks to do.
I've already set up a git user:
users.users.git = {
isNormalUser = true;
shell = "/run/current-system/sw/bin/git-shell";
home = "/home/git";
openssh.authorizedKeys.keys = [
...
];
};
My derivation looks like this:
with import <nixpkgs> {};
let setupGitRepo = name : (
stdenv.mkDerivation {
name = "setup-git-repo";
dontUnpack = true;
buildInputs = [
git
];
buildPhase = ''
git init --bare ${name}.git
mkdir -p ${name}.git/hooks
touch ${name}.git/hooks/post-receive
tee ${name}.git/hooks/post-receive <<EOF
GIT="/home/git/${name}.git"
WWW="/var/www/${name}"
TMP="/tmp/${name}"
ENV="/home/git/${name}.env"
rm -rf \$TMP
mkdir -p \$TMP
git --work-tree=\$TMP --git-dir=\$GIT checkout -f
cp -a \$ENV/.* \$TMP
cd \$TMP
# install deps, etc
rm -rf \$WWW/*
mv \$TMP/* \$WWW/
# restart services here
rm -rf \$TMP
EOF
'';
installPhase = ''
mkdir -p $out/var/www/${name}
mkdir -p $out/home/git
mkdir -p $out/home/git/${name}.env
chown -R git:users ${name}.git # doesn't work
chown -R git:users $out/var/www/${name} # doesn't work
cp -R ${name}.git $out/home/git
'';
});
in setupGitRepo "test"
My problem is that I can't use chown git:users to set ownership during the build or install phase, I assume because of isolation in the build process.
Is there a way to overcome this?
I wonder if the issue I'm hitting is a sign that I'm missing something obvious or misusing the tools.
Writing to /home from a package could be another code smell: I'm doing it to have a nicer URL to add as a git remote (git remote add server git@mydomain:test.git).
Thanks
EDIT: I'll upload my nixos configuration here with the suggestions from David: https://github.com/framp/nixos-configs/blob/master/painpoint
In general, as far as I know, Linux has no way to give ownership of a file to another user, unless you are root.
Secondly, I question why you would even want files in the output of your derivation to be owned by a different user. As described in the Nix manual, after building a derivation, Nix sets the modes of all files to 0444 or 0555, meaning they will be readable by all users on the system, and some will also be executable by all users on the system. There should be no additional permissions that you need for your users.
Remember that the output of a Nix derivation is supposed to be an immutable thing that never changes.
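If you want to verify this for yourself, a quick check might look like the following (a minimal sketch, assuming the derivation above is saved as default.nix and the chown lines are removed so the build succeeds):
$ ls -lR "$(nix-build default.nix --no-out-link)"
# every file in the output is mode 0444 or 0555 and owned by root (or the build user),
# regardless of any chown attempted during the build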
Related
I am trying to build a docker image that contains all of the necessary plugins/providers that several source repos need, so that when an automated terraform validate runs, it doesn't have to download gigs of redundant data.
However, I recognize that this creates a maintenance problem: someone may update a plugin version, and that version would need to be downloaded, since the docker image would not contain it.
The question
How can I pre-download all providers and plugins
Tell the CLI to use those predownloaded plugins, AND
also tell it that, if it doesn't find what it needs locally, then it can go to the network
Below are the relevant files:
.terraformrc
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
disable_checkpoint = true
provider_installation {
  filesystem_mirror {
    path = "$HOME/.terraform/providers"
  }
  direct {
  }
}
tflint (not relevant to this question, but it shows up in the below Dockerfile)
plugin "aws" {
enabled = true
version = "0.21.1"
source = "github.com/terraform-linters/tflint-ruleset-aws"
}
plugin "azurerm" {
enabled = true
version = "0.20.0"
source = "github.com/terraform-linters/tflint-ruleset-azurerm"
}
Dockerfile
FROM ghcr.io/terraform-linters/tflint-bundle AS base
LABEL name=tflint
RUN adduser -h /home/jenkins -s /bin/sh -u 1000 -D jenkins
RUN apk fix && apk --no-cache --update add git terraform openssh
ADD .terraformrc /home/jenkins/.terraformrc
RUN mkdir -p /home/jenkins/.terraform.d/plugin-cache/registry.terraform.io
ADD .tflint.hcl /home/jenkins/.tflint.hcl
WORKDIR /home/jenkins
RUN tflint --init
FROM base AS build
ARG SSH_PRIVATE_KEY
RUN mkdir /root/.ssh && \
echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_ed25519 && \
chmod 400 /root/.ssh/id_ed25519 && \
touch /root/.ssh/known_hosts && \
ssh-keyscan mygitrepo >> /root/.ssh/known_hosts
RUN git clone git@mygitrepo:wrai/tools/g.git
RUN git clone git@mygitrepo:myproject/a.git && \
git clone git@mygitrepo:myproject/b.git && \
git clone git@mygitrepo:myproject/c.git && \
git clone git@mygitrepo:myproject/d.git && \
git clone git@mygitrepo:myproject/e.git && \
git clone git@mygitrepo:myproject/f.git
RUN ls -1d */ | xargs -I {} find {} -name '*.tf' | xargs -n 1 dirname | sort -u | \
xargs -I {} -n 1 -P 20 terraform -chdir={} providers mirror /home/jenkins/.terraform.d
RUN chown -R jenkins:jenkins /home/jenkins
USER jenkins
FROM base AS a
COPY --from=build /home/jenkins/a/ /home/jenkins/a
RUN cd /home/jenkins/a && terraform init
FROM base AS b
COPY --from=build /home/jenkins/b/ /home/jenkins/b
RUN cd /home/jenkins/b && terraform init
FROM base AS c
COPY --from=build /home/jenkins/c/ /home/jenkins/c
RUN cd /home/jenkins/c && terraform init
FROM base AS azure_infrastructure
COPY --from=build /home/jenkins/d/ /home/jenkins/d
RUN cd /home/jenkins/d && terraform init
FROM base AS aws_infrastructure
COPY --from=build /home/jenkins/e/ /home/jenkins/e
RUN cd /home/jenkins/e && terraform init
Staging plugins:
This is most easily accomplished with the plugin cache dir setting in the CLI. This supersedes the old usage with the -plugin-dir=PATH argument for the init command. You could also set a filesystem mirror in each terraform block within the root module config, but this would be cumbersome for your use case. In your situation, you are already configuring this in your .terraformrc, but the filesystem_mirror path conflicts with the plugin_cache_dir. You would want to resolve that conflict, or perhaps remove the mirror block entirely.
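For illustration, one way to resolve that conflict (a sketch, not from the original post; the paths are the ones already used above) is to keep only the cache directory in the CLI configuration and drop the mirror block:
mkdir -p "$HOME/.terraform.d/plugin-cache"
cat > "$HOME/.terraformrc" <<'EOF'
plugin_cache_dir   = "$HOME/.terraform.d/plugin-cache"
disable_checkpoint = true
EOF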
Use staged plugins:
Since the setting is captured in the CLI configuration file within the Dockerfile, this would be automatically used in future commands.
Download additional plugins if necessary:
This is default behavior of the init command, and therefore requires no further actions on your part.
Side note:
The jenkins user typically has /sbin/nologin as its shell and /var/lib/jenkins as its home directory. If the purpose of this Docker image is for a Jenkins build agent, then you may want the jenkins user configuration to be more aligned with the standard.
TL;DR:
Configure the terraform plugin cache directory
Create directory with a single TF file containing required_providers block
Run terraform init from there
...
I've stumbled over this question as I tried to figure out the same thing.
I first tried leveraging an implied filesystem_mirror by running terraform providers mirror /usr/local/share/terraform/plugins in a directory containing only one terraform file containing the required_providers block. This works fine as long as you only use the versions of the providers you mirrored.
However, it's not possible to use a different version of a provider than the one you mirrored, because:
Terraform will scan all of the filesystem mirror directories to see which providers are placed there and automatically exclude all of those providers from the implied direct block.
I've found it to be a better solution to use a plugin cache directory instead. EDIT: You can prefetch the plugins by setting TF_PLUGIN_CACHE_DIR to some directory and then running terraform init in a directory that only declares the required_providers.
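A minimal sketch of that prefetch step (the directory name and the aws provider below are only illustrative, not from the original question):
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"
mkdir -p "$TF_PLUGIN_CACHE_DIR" prefetch && cd prefetch
cat > providers.tf <<'EOF'
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
EOF
terraform init   # fills the cache; later terraform init runs in real modules will reuse it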
Previously overengineered stuff below:
The only hurdle left was that terraform providers mirror downloads the providers in the packed layout:
Packed layout: HOSTNAME/NAMESPACE/TYPE/terraform-provider-TYPE_VERSION_TARGET.zip is the distribution zip file obtained from the provider's origin registry.
while Terraform expects the plugin cache directory to use the unpacked layout:
Unpacked layout: HOSTNAME/NAMESPACE/TYPE/VERSION/TARGET is a directory containing the result of extracting the provider's distribution zip file.
So I converted the packed layout to the unpacked layout with the help of find and parallel:
find path/to/plugin-dir -name index.json -exec rm {} +
find path/to/plugin-dir -name '*.json' | parallel --will-cite 'mkdir -p {//}/{/.}/linux_amd64; unzip {//}/*.zip -d {//}/{/.}/linux_amd64; rm {}; rm {//}/*.zip'
I want to set up a binding (mount volume) between JupyterLab and the VM. The only problem is the permissions: the folder that I create (/home/jovyan/work) always has root permissions, and I cannot create anything inside it. I tried many solutions. In a Dockerfile I tried this solution:
FROM my_image
RUN if [[ -d "$HOME/toto" ]] ; then echo OK ; else mkdir "$HOME/toto" ; fi
RUN chmod 777 $HOME/toto
==> I still got no permissions on the mounted folder
Then I tried this:
ARG NB_DIR=/tmp/lab_share
RUN mkdir "$NB_DIR"
RUN chown :100 "$NB_DIR"
RUN chmod g+rws "$NB_DIR"
RUN apt-get install acl
RUN setfacl -d -m g::rwx "$NB_DIR"
USER root
==> The problem here is that setfacl is not recognized in the container; I tried to install it just beforehand, but that still didn't work.
I tried to add GRANT_SUDO in the JupyterHub service:
extraEnv:
GRANT_SUDO: "yes"
==> The problem here is that extraEnv is not recognized
I tried to create a hook function in the jupyterhub_config file, just after the binding code:
notebook_dir = os.environ.get('DOCKER_NOTEBOOK_DIR') or '/home/jovyan/work'
c.DockerSpawner.notebook_dir = notebook_dir
c.DockerSpawner.volumes = { "/tmp/{username}" :notebook_dir}
c.DockerSpawner.remove_containers = True
##### CREATE HOOK
def create_dir_hook(spawner):
    username = spawner.user.name  # get the username
    logger.info(f"### USERNAME {username}")
    volume_path = os.path.join('/tmp', username)
    logger.info(f"### USERNAME {volume_path}")
    if not os.path.exists(volume_path):
        # create a directory with umask 0755
        # hub and container user must have the same UID to be writeable
        # still readable by other users on the system
        logger.info(f"FOLDER FOR USER {username} not existing")
        logger.info(f"CREATING A FOLDER IN {volume_path}")
        os.mkdir(volume_path, 0o777)
        # now do whatever you think your user needs
        # ...
        logger.info("coucou")
# attach the hook function to the spawner
c.Spawner.pre_spawn_hook = create_dir_hook
With this solution the interpreter doesn't read the whole if block, not even the else block. I found in the Docker docs that this is a well-known issue in Docker, but I didn't find its solution. I really need your help; if you have any solution I would appreciate it. Thanks a lot.
With the following command I am trying to clone into the /home/impdev/Impala folder. Yet, Docker clones into the /root/Impala folder. Is there a way to remedy this?
RUN adduser --disabled-password --gecos '' impdev && \
echo 'impdev ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers && \
su - impdev && \
git clone https://gitbox.apache.org/repos/asf/impala.git ~/Impala
This is important for me because when you clone from a user account, ownership and group values are assigned to the cloned folder and its sub-folders.
You should use the Dockerfile USER and WORKDIR directives to control the current user context. This might look like
USER root
RUN adduser --disabled-password --gecos '' --no-create-home impdev
WORKDIR /impdev
USER impdev
RUN git clone https://gitbox.apache.org/repos/asf/impala.git
Remember that in many ways the Docker environment is different from a typical Linux environment: since an image generally only packages a single application and its immediate dependencies, it's not usually interesting to create multiple users or think about users as having home directories. You almost never use su or sudo since a container generally only does one thing and starts up as the user it needs to be to do it. Both commands are also difficult to use in scripts. (In your example, if you do su to the new user, it completes and returns to the original root-user context before the script moves on to the next command.)
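If you want to see that last behaviour for yourself, a quick experiment (using the stock ubuntu image and the nobody user purely for illustration) is:
docker run --rm ubuntu sh -c 'su -s /bin/sh -c whoami nobody; whoami'
# prints "nobody" and then "root": su only affects the command it wraps,
# and the shell (or the next RUN step) is back to the original user afterwards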
I think I have a dilemma. I am trying to create a Dockerfile to reproduce a long and complicated installation process (of ROS) so that my students can get it running with less headache.
I am combining various provided scripts with manual steps that are documented. The manual steps often say to use "sudo", but I am told that using sudo inside a Dockerfile is to be avoided. So I move those steps to before the USER command in the Dockerfile, because I am told that those commands run as root. However, as a result, the files and directories created are owned by root, and I believe subsequent steps are failing.
I think I have two choices: move the commands to after the USER command and include sudo, or try to make the install scripts create directories and files with the right ownership. Of course, a priori, I don't know what files and directories are going to be created.
Here is my Dockerfile (actually one of many I have been experimenting with.) Also if you see any other things that need to be improved or fixed please let me know!
FROM ubuntu:16.04
# create non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
RUN apt-get update && apt-get install --assume-yes wget sudo && \
wget https://raw.githubusercontent.com/ROBOTIS-GIT/robotis_tools/master/install_ros_kinetic.sh && \
chmod 755 ./install_ros_kinetic.sh && \
bash ./install_ros_kinetic.sh
RUN apt-get install --assume-yes ros-kinetic-joy ros-kinetic-teleop-twist-joy ros-kinetic-teleop-twist-keyboard ros-kinetic-laser-proc ros-kinetic-rgbd-launch ros-kinetic-depthimage-to-laserscan ros-kinetic-rosserial-arduino ros-kinetic-rosserial-python ros-kinetic-rosserial-server ros-kinetic-rosserial-client ros-kinetic-rosserial-msgs ros-kinetic-amcl ros-kinetic-map-server ros-kinetic-move-base ros-kinetic-urdf ros-kinetic-xacro ros-kinetic-compressed-image-transport ros-kinetic-rqt-image-view ros-kinetic-gmapping ros-kinetic-navigation ros-kinetic-interactive-markers
USER $USERNAME
WORKDIR /home/$USERNAME
RUN cd /home/$USERNAME/catkin_ws/src/ && \
git clone https://github.com/ROBOTIS-GIT/turtlebot3_msgs.git && \
git clone https://github.com/ROBOTIS-GIT/turtlebot3.git && \
git clone https://github.com/ROBOTIS-GIT/turtlebot3_simulations.git
# add catkin env
RUN echo 'source /opt/ros/kinetic/setup.bash' >> /home/$USERNAME/.bashrc
RUN echo 'source /home/ros/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
# RUN . /home/ros/.bashrc && \
# cd /home/$USERNAME/catkin_ws && \
# catkin_make
USER $USERNAME
ENTRYPOINT /bin/bash
It would be interesting, for my own information, to learn why sudo should be avoided in containers.
Historically we have used Docker to automate build, test and deploy processes in our team, and we have always tried to write Dockerfiles as close as possible to the original process.
Let's say you build some app on your host and run some commands with sudo and some without; we managed to create Dockerfiles that mirror exactly those steps. The benefit is that you are no longer obligated to write READMEs on how to build the code - you just supply the Dockerfile, and whenever someone wants to repeat all the steps in a non-container environment, they just follow (copy/paste) the commands from the file.
So my proposal is: in the Dockerfile, install packages first, then switch to the user and proceed with all remaining steps, using sudo when necessary. You will have all artifacts owned by the user, not root.
UPD
Got the original discussion and this one. So it sounds like you choose the best approach based on your particular case and needs.
Docker by default saves passwords unencrypted on disk, merely encoded in base64. I want to securely store a login password using the docker-credential-pass keystore plugin to log in to my private registry.
https://github.com/docker/docker-credential-helpers/
I am stucked at this issue: https://github.com/docker/docker-credential-helpers/issues/102
I've tried everything the users commented and I couldn't find any documentation for Docker and pass. I googled some tutorials as well, without success. I restarted Docker multiple times while trying, and it just doesn't work. I would appreciate some help if someone knows how to set it up.
Don't know if it's still relevant to you but this worked for us (rh7 system):
Generate a new gpg2 key with gpg2 --gen-key and select all the default answers (apart from name, mail and passphrase). The output you get should contain a row that looks something like this:
pub 2048R/A154BD21 2019-09-12
Take the part after the / and init your pass with pass init <after-slash-part>, so in this example pass init A154BD21.
Add the line "credsStore":"pass" to your ~/.docker/config.json so that it looks something like
{
"credsStore":"pass"
}
Make sure that the location of your docker-credential-pass file is in your $PATH environment variable.
Now try logging in. If it's not working, please describe a bit more in detail what you do and if you get any error messages, etc.
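For reference, the same steps as a plain shell session might look like this (A154BD21 is the key id from the example output above; the registry name is a placeholder):
gpg2 --gen-key                      # accept the defaults, note the id after the slash on the "pub" line
pass init A154BD21                  # initialise the password store with that key id
mkdir -p ~/.docker
echo '{ "credsStore": "pass" }' > ~/.docker/config.json   # overwrites any existing config - merge by hand if needed
command -v docker-credential-pass   # must resolve somewhere on $PATH
docker login registry.example.com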
I went with a bash script like this that automates much of the process.
#!/bin/sh
# Sets up a docker credential helper so docker login credentials are not stored encoded in base64 plain text.
# Uses the pass secret service as the credentials store.
# If previously logged in w/o cred helper, docker logout <registry> under each user or remove ~/.docker/config.json.
# Tested on Ubuntu 18.04.5 LTS.
if ! [ $(id -u) = 0 ]; then
echo "This script must be run as root"
exit 1
fi
echo "Installing dependencies"
apt update && apt-get -y install gnupg2 pass rng-tools jq
# Check for later releases at https://github.com/docker/docker-credential-helpers/releases
version="v0.6.3"
archive="docker-credential-pass-$version-amd64.tar.gz"
url="https://github.com/docker/docker-credential-helpers/releases/download/$version/$archive"
# Download cred helper, unpack, make executable, and move it where Docker will find it.
wget $url \
&& tar -xf $archive \
&& chmod +x docker-credential-pass \
&& mv -f docker-credential-pass /usr/local/bin/
# Done with the archive
rm -f $archive
config_path=~/.docker
config_filename=$config_path/config.json
# Could assume config.json isn't there or overwrite regardless and not use jq (or sed etc.)
# echo '{ "credsStore": "pass" }' > $config_filename
if [ ! -f $config_filename ]
then
if [ ! -d $config_path ]
then
mkdir -p $config_path
fi
# Create default docker config file if it doesn't exist (never logged in etc.). Empty is fine currently.
cat > $config_filename <<EOL
{
}
EOL
echo "$config_filename created with defaults"
else
echo "$config_filename already exists"
fi
# Whether config is new or existing, read into variable for easier file redirection (cat > truncate timing)
config_json=`cat $config_filename`
if [ -z "$config_json" ]; then
# Empty file will prevent jq from working
config_json="{}"
fi
# Update Docker config to set the credential store. Used sed before but messy / edge cases.
echo "$config_json" | jq --arg credsStore pass '. + {credsStore: $credsStore}' > $config_filename
# Output / verify contents
echo "$config_filename:"
cat $config_filename | jq
# Help with entropy to prevent gpg2 full key generation hang
# Feeds data from a random number generator to the kernel's random number entropy pool
rngd -r /dev/urandom
# To cleanup extras from multiple runs: gpg --delete-secret-key <key-id>; gpg --delete-key <key-id>
echo "Generating GPG key, accept defaults but consider key size to 2048, supply user info"
gpg2 --full-generate-key
echo "Adjusting permissions"
sudo chown -R $USER:$USER ~/.gnupg
sudo find ~/.gnupg -type d -exec chmod 700 {} \;
sudo find ~/.gnupg -type f -exec chmod 600 {} \;
# List keys
gpg2 -k
key=$(gpg2 --list-secret-keys | grep uid -B 1 | head -n 1 | sed 's/^ *//g')
echo "Initializing pass with key $key"
pass init $key
echo "Enter a password to add to the secure store"
pass insert docker-credential-helpers/docker-pass-initialized-check
# Just a verification. Don't need to show actual password, mask it.
echo "Password verification:"
pass show docker-credential-helpers/docker-pass-initialized-check | sed -e 's/\(.\)/\*/g'
echo "Docker credential password list (empty initially):"
docker-credential-pass list
echo "Done. Ready to test. Run: sudo docker login <registry>"
echo "Afterwards run: sudo docker-credential-pass list; sudo cat ~/.docker/config.json"
The docker-credential-pass helper doesn't set up a pass-based password store - it expects an already functional password store, so I would advise you to first set that up before incorporating the credential helper.
Pass is a password manager that is essentially a bash script that automates encrypting/decrypting secrets using GnuPG. That means a working pass setup requires each of those tools to function: pass and gpg2. Optionally the password store can be a git repository, in which case you'll need git installed as well.
After downloading pass and setting up GnuPG, initialize the password store with your gpg id. From the docs (assuming "ZX2C4 Password Storage Key" is your gpg id):
$ pass init "ZX2C4 Password Storage Key"
mkdir: created directory ‘/home/zx2c4/.password-store’
Password store initialized for ZX2C4 Password Storage Key.
You should then be able to add passwords using the pass command, and if that works then enable the docker-credentials-pass helper as you've already done.
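For example, a quick smoke test of the store before wiring up the Docker helper could be (the entry name here is arbitrary):
pass insert docker-test/registry-password   # prompts for the secret and encrypts it with your gpg key
pass show docker-test/registry-password     # should decrypt and print it back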
https://www.passwordstore.org/