pgpass file that uses variables? - psql

I'm working on a docker image that connects to postgres using psql.
My entrypoint:
psql analytics \
  --host=${INPUT_HOST} \
  --username=analytics \
  --port=32648
If I run this I'm prompted for a password, I enter it and am able to connect. Great.
But if I try to make the password entry automatic I get an error:
psql analytics \
  --host=${INPUT_HOST} \
  --username=analytics \
  --port=32648 \
  --password=${INPUT_PASSWORD}
/usr/lib/postgresql/13/bin/psql: option '--password' doesn't allow an argument
Try "psql --help" for more information.
I found some docs on using .pgpass. This file, which is to be added to a user's home directory, takes the form:
hostname:port:database:username:password
Now I'm going to have to do something like:
${INPUT_HOST}:5432:analytics:analytics:${INPUT_PASSWORD}
Then I'd run envsubst or sed on this file before adding it to the image, as sketched below.
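A minimal sketch of that substitution step (the file name /pgpass.template is my assumption; libpq also requires the final file to be user-only readable):

envsubst < /pgpass.template > ~/.pgpass
# libpq ignores ~/.pgpass unless its permissions are 0600
chmod 600 ~/.pgpass
# note: the port field in the file must match the port psql connects with (32648 above, not 5432)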
Open-ended question: is there a better/more convenient way? ${INPUT_PASSWORD} comes from a docker secret. Is there any way I can pass a password to my call to psql?

The best way is to use a connection string:
psql "password='${INPUT_PASSWORD}' dbname=analytics host='${INPUT_HOST}' user=analytics port=32648"
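Alternatively, and not part of the original answer, libpq's documented PGPASSWORD environment variable avoids both the connection string and a .pgpass file:

# PGPASSWORD is read by libpq, so no password prompt appears
PGPASSWORD="${INPUT_PASSWORD}" psql analytics \
  --host="${INPUT_HOST}" \
  --username=analytics \
  --port=32648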

Use ~/.ssh/config hosts in docker context

I'm trying to find a way to use hosts defined in my user's ~/.ssh/config file to define a docker context.
My ~/.ssh/config file contains:
Host my-server
    HostName 10.10.10.10
    User remoteuser
    IdentityFile /home/me/.ssh/id_rsa-mykey.pub
    IdentitiesOnly yes
I'd like to create a docker context as follows:
docker context create \
  --docker host=ssh://my-server \
  --description="remoteuser on 10.10.10.10" \
  my-server
Issuing the docker --context my-server ps command throws an error stating:
... please make sure the URL is valid ... Could not resolve hostname my-server: Name or service not known
From what I could figure out, the docker command uses the sudo mechanism to elevate its privileges. Thus I guess it searches /root/.ssh/config, since ssh doesn't use the $HOME variable.
I tried to symlink the user's config as the root one:
sudo ln -s /home/user/.ssh/config /root/.ssh/config
But this throws another error:
... please make sure the URL is valid ... Bad owner or permissions on /home/user/.ssh/config
The same happens when creating the /root/.ssh/config file simply containing:
Include /home/*/.ssh/config
Does someone have an idea of how to have my user's .ssh/config file parsed by ssh when invoked via sudo?
Thank you.
Have you confirmed your (probably correct) theory that docker is running as root, by just directly copying your user's ~/.ssh/config contents into /root/.ssh/config? If that doesn't work, you're back to square one...
Otherwise, either the symlink or the Include ought to work just fine (a symlink inherits the permissions of the file it is pointing at).
Another possibility is that your permissions actually are bad -- don't forget you have to change the permissions on both ~/.ssh AND ~/.ssh/config.
chmod 700 /home/user/.ssh
chmod 600 /home/user/.ssh/config
And maybe even:
chmod 700 /root/.ssh
chmod 600 /root/.ssh/config
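If the root theory is confirmed and copying turns out to be the fix, a minimal sketch of that copy (paths taken from the question):

# give root its own copy, with ownership and permissions ssh will accept
sudo cp /home/user/.ssh/config /root/.ssh/config
sudo chown root:root /root/.ssh/config
sudo chmod 600 /root/.ssh/config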

How to fetch a war file from JFrog Artifactory inside a Dockerfile? Getting HTTP 401 error

I have created a declarative Jenkins pipeline, and one of its stages is as follows:
stage('Docker Image'){
    steps{
        bat 'docker build -t HMT/demo-application:%BUILD_NUMBER% --no-cache -f Dockerfile .'
    }
}
This is the Dockerfile:
FROM tomcat:alpine
RUN wget -O /usr/local/tomcat/webapps/launchstation04.war http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-SNAPSHOT.war
EXPOSE 9100
CMD /usr/local/tomcat/bin/cataline.bat run
I am getting the below error:
/bin/sh:
The command '/bin/sh -c wget -O /usr/local/tomcat/webapps/launchstation04.war http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-SNAPSHOT.war' returned a non-zero code: 127
UPDATE:
I have updated the command to
RUN wget -O /usr/local/tomcat/webapps/launchstation04.war -U jenkinsuser:Learning#% http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-20200823.053346-18.war
There is no problem in my command. JFrog Artifactory was unable to authorize this action, so I added username and password details, but it still didn't work.
Error:
wget: server returned error: HTTP/1.1 401 Unauthorized
It didn't work after modifying the password policy to unsupported, but it worked when I allowed anonymous access.
How do I provide access using credentials?
Need more clarification on your question. Not sure where you are using the curl command.
The tomcat:alpine image doesn't contain the curl command unless you install it manually.
bash-4.4# type curl
bash: type: curl: not found
bash-4.4#
If your question is about the sh -c option: if the script is invoked through the CMD option, yes, it will use sh. Instead, you can give ENTRYPOINT a try.
You can provide username & password via command line:
wget --user user --password pass
Using curl:
curl -u username:password -O
But avoid using special characters; change your password to another one in [a-z][A-Z][0-9].
Try an API key instead of the password; I have a feeling that "#" may be throwing you off. Quotes can help there too, or separating the password with -p.
Also look at the request logs to see whether the entry comes in as 401 for the user, or as anonymous/unauthenticated.
Lastly, see if you can cURL from outside the image and then ADD the file in, as that will remove any external factors that may vary from the host (where I assume the command works). A sketch of that approach follows.
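A sketch of that last suggestion, assuming the host running the build can reach Artifactory and that $ARTIFACTORY_PASS is set; the file is then copied into the image instead of fetched during the build:

# fetch on the host, where credentials and networking are known-good
curl -u jenkinsuser:"$ARTIFACTORY_PASS" -o demo.war \
  "http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-SNAPSHOT.war"
# then replace the RUN wget line in the Dockerfile with:
#   COPY demo.war /usr/local/tomcat/webapps/launchstation04.war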

Running Docker on Google Cloud Instance with data in gcsfuse-mounted Bucket

I am trying to run a Docker container to analyze data in a Google Cloud Bucket.
I have been able to successfully mount the Bucket using gcsfuse, and I tested that I could do things like create and delete files within the Bucket.
In order to be able to install other programs (and mount the bucket), I installed Docker (and didn't use the Docker-optimized instance option). If I run Docker in interactive mode (without mounting a drive), it looks like it is working OK.
However, if I try to run Docker in interactive mode with the mounted drive (which is the gcsfuse-mounted Bucket), I get an error message:
user#instance:~/bucket-name/subfolder$ docker run -it -v /home/user/bucket-name:/mnt/bucket-name gcr.io/deepvariant-docker/deepvariant
docker: Error response from daemon: error while creating mount source path '/home/user/bucket-name': mkdir /home/user/bucket-name: file exists.
I hope that I am close to having this working: does anybody have any ideas about a relatively simple fix for this error message?
BTW, I realize that there are other ways to run DeepVariant on Google Cloud, but I am trying to make things as similar as possible to what I am doing on AWS (plus, I may need to do some extra troubleshooting for analysis of one of my files).
Thank you very much for your help!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
FYI, this is how I mounted the Bucket:
#mount directory: https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/installing.md
export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s`
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get -y install gcsfuse
#restart and mount directory: https://cloud.google.com/storage/docs/gcs-fuse
#NOTE: please make sure you are in your home directory (I encounter issues if I try to mount from /mnt)
mkdir [bucket-name]
gcsfuse -o allow_other --file-mode 777 --dir-mode 777 [bucket-name] ./[bucket-name]
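As a quick sanity check of the mount (my addition, mirroring the create/delete test mentioned above):

# confirm the mounted bucket is both readable and writable
touch ./[bucket-name]/gcsfuse-test && ls -l ./[bucket-name] && rm ./[bucket-name]/gcsfuse-test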
and this is how I installed Docker:
#install Docker for Debian: https://docs.docker.com/install/linux/docker-ce/debian/
sudo apt-get update
sudo apt-get -y install \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg2 \
  software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/debian \
  $(lsb_release -cs) \
  stable"
sudo apt-get update
sudo apt-get -y --allow-unauthenticated install docker-ce docker-ce-cli containerd.io
#fix Docker sock issue: https://stackoverflow.com/questions/47854463/got-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket-at-uni
sudo usermod -a -G docker [user]
#have to restart after this
For anyone experiencing a similar error / issue - here is what worked for me. Steps I took:
First unmount the disk if it's already mounted: sudo umount /mounted_folder
Remount the disk using the below command, listing the credentials file to be used explicitly
sudo GOOGLE_APPLICATION_CREDENTIALS=/home/user/credentials/example-asdf21b0af7.json gcsfuse -o allow_other bucket_name /mounted_folder
Should now be connected successfully without further errors :)
NOTE: This command needs to be run every time after restarting the computer/VM. Formatting this into fstab could probably be done so one does not need to manually execute these steps upon each restart; a sketch follows.
EXPLANATION: What I did here was explicitly specify the credentials via a credentials JSON for the user/service account with appropriate access (not explained here, but it should be googleable) and refer to that JSON in the GOOGLE_APPLICATION_CREDENTIALS environment variable, as suggested by this answer: https://stackoverflow.com/a/39047673/10002593. The need for this environment variable is likely due to gcsfuse not registering the same level of access as the activated account in gcloud config for some reason.
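If your gcsfuse version supports the key_file mount option, the /etc/fstab entry hinted at above might look like this (paths reused from the commands in this answer):

# /etc/fstab: remounts at boot, or manually via: sudo mount /mounted_folder
bucket_name /mounted_folder gcsfuse rw,allow_other,key_file=/home/user/credentials/example-asdf21b0af7.json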
I think I figured out at least a partial solution to my problem:
As mentioned in this tutorial, you also need to run gcloud auth configure-docker.
I found you also needed to exit and restart your instance, but this strictly solved the original error message for this post.
I think I got a strange message, but perhaps that is more about the specific container. So, I ran another test:
docker run -it -v /home/user/bucket-name:/mnt/bucket-name cwarden45/dnaseq-dependencies
This time, I got an error message about storage space on the instance (to be able to download and run the Docker container). So, I went back and created a new instance with a larger local hard drive:
1) From the Google Cloud Console, I selected "Compute Instance" and "VM instances"
2) I clicked "create instance" (similar to before)
3) I select "change" under "boot disk"
4) I set size to 300 GB instead of 10 GB (currently, towards bottom-right, under "Size (GB)")
Similar to before, I chose 8 vCPUs for the "Machine type", selected "Allow full access to all Cloud APIs" under "Identity and API access", and checked the boxes for both "Allow HTTP traffic" and "Allow HTTPS traffic" (under "Firewall").
I did not select "Deploy a container image to this VM instance," which I believe is what let me install Docker with "sudo" in order to be able to install gcsfuse.
I also have to call this a "partial" solution because it allows me to run the Docker container successfully in interactive mode, but the mounted bucket appears empty within Docker.
For another project, I noticed that executables could work if I installed them on the local hard drive under /opt, but not if I tried to install them on my bucket (in order to save the installation time for those programs each time). On AWS, I believe I needed to use EFS storage instead of S3 storage to do something similar, but I will keep learning more about using the Google Cloud Bucket for mounted storage / analysis.
Also, it is a different issue, but I noticed that I could fix a problem with running executable files from the bucket by changing the command from gcsfuse [bucket-name] ./[bucket-name] to gcsfuse --file-mode 777 --dir-mode 777 [bucket-name] ./[bucket-name] (and I changed the example code accordingly).
I noticed more recently that the set of commands above is no longer sufficient to produce a functional directory (I can't add or edit files, for example).
Based upon this discussion, I thought that I needed to add the -o allow_other parameter.
However, if that is all I do, I get the following error message:
fusermount: option allow_other only allowed if 'user_allow_other' is set in /etc/fuse.conf
I can resolve that error message by uncommenting the corresponding line in that file. However, that still doesn't produce the right file permissions in the mounted directory.
So, I then tried editing my /etc/fstab file by adding the following entry:
[bucket-name] /home/[username]/[bucket-name] gcsfuse rw,allow_other,file_mode=777,dir_mode=777
I am also accordingly editing the content at the top (for whatever seems like it might help).
Also, please note that this was not a Docker-specific issue. This was necessary to essentially do anything within the bucket. Plus, I haven't actually solved this new problem.
For example, I still can't create files as root, after changing to the superuser via sudo su - (as described here)
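For what it's worth, an /etc/fstab entry like the one above can be tested without rebooting (a sketch, using the placeholders from this post):

# unmount if currently mounted, then mount everything listed in /etc/fstab
sudo umount /home/[username]/[bucket-name] 2>/dev/null
sudo mount -a   # any error here points at a bad fstab line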

How to use docker-credential-pass to login to a private registry?

Docker by default saves passwords unencrypted on disk, encoded in base64. I want to securely store a login password using the docker-credential-pass keystore plugin to log in to my private registry.
https://github.com/docker/docker-credential-helpers/
I am stuck at this issue: https://github.com/docker/docker-credential-helpers/issues/102
I've tried everything the users suggest in the comments, and I couldn't find any documentation for docker and pass. I googled some tutorials as well, without success. I restarted docker multiple times while trying, and it just doesn't work. I would appreciate some help if someone knows how to set it up.
Don't know if it's still relevant to you, but this worked for us (RHEL 7 system):
Generate a new gpg2 key with gpg2 --gen-key and select all the default answers (apart from name, mail and passphrase). The output you get should contain a row that looks something like this:
pub 2048R/A154BD21 2019-09-12
Take the part after the / and initialize your pass store with pass init <after-slash-part>, so in this example pass init A154BD21.
Add the line "credsStore":"pass" to your ~/.docker/config.json so that it looks something like
{
"credsStore":"pass"
}
Make sure that the location of your docker-credential-pass binary is in your $PATH environment variable.
Now try logging in. If it's not working, please describe in a bit more detail what you do and whether you get any error messages, etc.
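Condensed into commands, the steps above might look like this (A154BD21 is the example key id from the output shown, yours will differ; the registry name is hypothetical):

gpg2 --gen-key                     # note the id after the slash on the "pub" line
pass init A154BD21                 # initialize the password store with that key
docker login registry.example.com  # should now store credentials via the helper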
I went with a bash script like this that automates much of the process.
#!/bin/sh
# Sets up a docker credential helper so docker login credentials are not stored encoded in base64 plain text.
# Uses the pass secret service as the credentials store.
# If previously logged in w/o cred helper, docker logout <registry> under each user or remove ~/.docker/config.json.
# Tested on Ubuntu 18.04.5 LTS.
if ! [ $(id -u) = 0 ]; then
  echo "This script must be run as root"
  exit 1
fi
echo "Installing dependencies"
apt update && apt-get -y install gnupg2 pass rng-tools jq
# Check for later releases at https://github.com/docker/docker-credential-helpers/releases
version="v0.6.3"
archive="docker-credential-pass-$version-amd64.tar.gz"
url="https://github.com/docker/docker-credential-helpers/releases/download/$version/$archive"
# Download cred helper, unpack, make executable, and move it where Docker will find it.
wget $url \
  && tar -xf $archive \
  && chmod +x docker-credential-pass \
  && mv -f docker-credential-pass /usr/local/bin/
# Done with the archive
rm -f $archive
config_path=~/.docker
config_filename=$config_path/config.json
# Could assume config.json isn't there or overwrite regardless and not use jq (or sed etc.)
# echo '{ "credsStore": "pass" }' > $config_filename
if [ ! -f $config_filename ]
then
  if [ ! -d $config_path ]
  then
    mkdir -p $config_path
  fi
  # Create default docker config file if it doesn't exist (never logged in etc.). Empty is fine currently.
  cat > $config_filename <<EOL
{
}
EOL
  echo "$config_filename created with defaults"
else
  echo "$config_filename already exists"
fi
# Whether config is new or existing, read into variable for easier file redirection (cat > truncate timing)
config_json=`cat $config_filename`
if [ -z "$config_json" ]; then
  # Empty file will prevent jq from working
  config_json="{}"
fi
# Update Docker config to set the credential store. Used sed before but messy / edge cases.
echo "$config_json" | jq --arg credsStore pass '. + {credsStore: $credsStore}' > $config_filename
# Output / verify contents
echo "$config_filename:"
cat $config_filename | jq
# Help with entropy to prevent gpg2 full key generation hang
# Feeds data from a random number generator to the kernel's random number entropy pool
rngd -r /dev/urandom
# To cleanup extras from multiple runs: gpg --delete-secret-key <key-id>; gpg --delete-key <key-id>
echo "Generating GPG key, accept defaults but consider key size to 2048, supply user info"
gpg2 --full-generate-key
echo "Adjusting permissions"
sudo chown -R $USER:$USER ~/.gnupg
sudo find ~/.gnupg -type d -exec chmod 700 {} \;
sudo find ~/.gnupg -type f -exec chmod 600 {} \;
# List keys
gpg2 -k
key=$(gpg2 --list-secret-keys | grep uid -B 1 | head -n 1 | sed 's/^ *//g')
echo "Initializing pass with key $key"
pass init $key
echo "Enter a password to add to the secure store"
pass insert docker-credential-helpers/docker-pass-initialized-check
# Just a verification. Don't need to show actual password, mask it.
echo "Password verification:"
pass show docker-credential-helpers/docker-pass-initialized-check | sed -e 's/\(.\)/\*/g'
echo "Docker credential password list (empty initially):"
docker-credential-pass list
echo "Done. Ready to test. Run: sudo docker login <registry>"
echo "Afterwards run: sudo docker-credential-pass list; sudo cat ~/.docker/config.json"
The docker-credential-pass helper doesn't set up a pass-based password store; it expects an already functional password store, so I would advise you to first set that up before incorporating the credentials helper.
Pass is a password manager that is essentially a bash script automating the encryption/decryption of secrets with GnuPG. That means a working pass setup requires each of those tools to function: pass and gpg2. Optionally, the password store can be a git repository, in which case you'll need git installed as well.
After downloading pass and setting up GnuPG, initialize the password store with your gpg id. From the docs (assuming "ZX2C4 Password Storage Key" is your gpg id):
$ pass init "ZX2C4 Password Storage Key"
mkdir: created directory ‘/home/zx2c4/.password-store’
Password store initialized for ZX2C4 Password Storage Key.
You should then be able to add passwords using the pass command, and if that works, enable the docker-credential-pass helper as you've already done.
https://www.passwordstore.org/
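Once the store works, a quick end-to-end check might look like this (the registry name is hypothetical):

pass insert docker/smoke-test       # confirm pass itself can store a secret
docker login registry.example.com   # credentials should now go through the helper
docker-credential-pass list         # the registry should be listed here, not in config.json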

TightVNC authentication failed, how to get encrypted password?

I'm trying to set RSelenium with docker following these instructions.
In "Remote control/debugging with Windows" I noticed something really strange. I installed TightVNC and set the passwords, but I got "Authentication Failed" while using these passwords. The guide said:
You will be asked for a password, which is secret. This can be seen by reading the image's Dockerfile:
and there is the following code:
RUN apt-get update -qqy \
&& apt-get -qqy install \
x11vnc \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p ~/.vnc \
&& x11vnc -storepasswd secret ~/.vnc/passwd
I may be wrong, but this looks like a Linux command to me. Despite this, I tried to paste it into docker, but I got
bash: apt-get: command not found
Does this guide need to be fixed, or am I missing something? Right now I'm unable to connect and complete VNC debugging.
So you got a few things wrong conceptually. The guide is absolutely fine. VNC has two parts: a VNC server and a VNC viewer. When you installed TightVNC locally on your system, you may have installed the server version, which asks you for a password. That password is for your system's VNC server. Along with it, a VNC client named VNC Viewer or something similar would also have been installed.
Now, the docker image that you run hosts a VNC server on port 5901, and the password for the connection is secret. So the only thing you need to do is open VNC Viewer and connect to port 5901 on the Docker host. When asked for a password, enter secret.
The Dockerfile was shown to you to explain how the author got the password; those commands have nothing to do with your system.
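For context, the usual flow with a Selenium debug image looks something like this (image name and port mapping assumed from typical RSelenium setups, not taken from the guide itself):

# publish the container's internal VNC port 5900 as 5901 on the host
docker run -d -p 4444:4444 -p 5901:5900 selenium/standalone-firefox-debug
# then point TightVNC Viewer at localhost:5901 and enter the password: secret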
