unlock key store docker-credential-pass - docker

I have set up docker-credential-pass using this guide:
https://github.com/docker/docker-credential-helpers/issues/102#issuecomment-388634452
My ~/.docker/config.json looks like:
{
  "credsStore": "pass",
  "credHelpers": {
    "<account-id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
  }
}
I can docker login/push/pull without any issue, but when I build an image, it gives me an error:
Dockerfile:
FROM alpine:3.15
RUN apk add --no-cache ca-certificates curl
...
Error:
ERROR: failed to solve: error getting credentials - err: exit status 1, out: exit status 2: gpg: decryption failed: No secret key
If I unlock the key store with pass show docker-credential-helper/<key-store>/<username>, it prompts me for the passphrase of my gpg key; after entering the passphrase, the build passes without errors. I want to build this image inside a script, so is there a way to unlock the key store without the prompt?
echo <passphrase> | pass show docker-credential-helper/<key-store>/<username> does not work.
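One thing worth trying is to pre-seed the gpg-agent cache before the build, so that pass (and therefore docker-credential-pass) never has to prompt. This is only a sketch, assuming GnuPG 2.1+ and that keeping the passphrase in a root-only file is acceptable; the keygrip, passphrase path and image tag are placeholders:
# One-time agent setup: allow the cache to be pre-seeded.
echo "allow-preset-passphrase" >> ~/.gnupg/gpg-agent.conf
gpg-connect-agent reloadagent /bye
# Find the keygrip of the key that protects the password store.
gpg --list-secret-keys --with-keygrip
# Pre-seed the agent cache; gpg-preset-passphrase reads the passphrase from stdin.
# Its install path varies by distro (e.g. /usr/lib/gnupg/ or /usr/libexec/).
/usr/lib/gnupg/gpg-preset-passphrase --preset <keygrip> < /path/to/passphrase.txt
# pass can now decrypt entries without a pinentry prompt, so the build runs unattended.
docker build -t <image> .
The cached passphrase is still subject to the agent's cache TTL settings (default-cache-ttl / max-cache-ttl in gpg-agent.conf), so those may need raising for long-running scripts.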

smartcard + configfile: how to avoid error "File name too long"?

I am trying to change my openconnect usage from command line to configfile.
I need to use a smartcard (StarSign CUT S, from Giesecke & Devrient GmbH) in order to access my VPN.
My current command line works fine and I can connect to the VPN:
$ openconnect \
--authgroup=<my_gateway> \
--protocol=gp \
--servercert <...> \
--disable-ipv6 \
--cafile <file.pem> \
<my_server_url> \
-c "pkcs11:model=XXXXXXXXXXXXXXXX;manufacturer=A.E.T.%20Europe%20B.V.;serial=XXXXXXXXXXXXXXXX;token=XXXXXXXXX;id=<...>;object=<...>;type=cert"
But when I try this configfile:
(All arguments are exactly the same!)
# vpn.config
authgroup = <my_gateway>
protocol = gp
servercert = <...>
disable-ipv6
cafile = <file.pem>
server = <my_server_url>
certificate = "pkcs11:model=XXXXXXXXXXXXXXXX;manufacturer=A.E.T.%20Europe%20B.V.;serial=XXXXXXXXXXXXXXXX;token=XXXXXXXXX;id=<...>;object=<...>;type=cert"
I get this error:
$ openconnect --config=vpn.config
Failed to open key/certificate file <...>: File name too long
Loading certificate failed. Aborting.
Failed to open HTTPS connection to <...>
Failed to complete authentication
Any idea on how to make it work? Or is it a bug in openconnect?
Thanks.
PS 1:
$ openconnect --version
OpenConnect version v9.01
Using GnuTLS 3.7.7. Features present: PKCS#11, HOTP software token, TOTP software token, System keys, DTLS, ESP
Supported protocols: anyconnect (default), nc, gp, pulse, f5, fortinet, array
Default vpnc-script (override with --script): /etc/vpnc/vpnc-script
PS 2: All commands executed as root.
Remove the double quotes from the configuration file. Unlike on the command line, openconnect's config parser does not strip quotes, so they become part of the value; it then no longer matches the pkcs11: URI form and is treated as a (very long) file name, which is exactly what the error complains about:
# vpn.config
...
certificate = pkcs11:model=XXXXXXXXXXXXXXXX;manufacturer=A.E.T.%20Europe%20B.V.;serial=XXXXXXXXXXXXXXXX;token=XXXXXXXXX;id=<...>;object=<...>;type=cert
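As an unrelated side check, GnuTLS ships p11tool, which can print the PKCS#11 URLs it sees; comparing its output against the (unquoted) certificate= value is a quick way to confirm the URI is what the library expects:
# List the token URLs visible to GnuTLS (p11tool ships with GnuTLS).
p11tool --list-token-urls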

Retrieving RSA key from AWS Secrets Manager in CodeBuild corrupts key "invalid format"

During a CodeBuild run I am retrieving an RSA key from Secrets Manager; it is the private key used to access private sources in Bitbucket. To do this I copied the private key into a secret, and in my buildspec file I have the following snippet:
"env": {
"secrets-manager": {
"LOCAL_RSA_VAR": "name-of-secret"
}
},
In the install portion of the buildspec:
"install": {
"commands": [
"echo $LOCAL_RSA_VAR" > ~/.ssh/id_rsa,
"chmod 600 ~/.ssh/id_rsa",
"yarn install"
]
},
HOWEVER, this always ends up with an error:
Load key "/root/.ssh/id_rsa": invalid format
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
To determine if the key was wrong, I tried uploading the id_rsa file to S3, downloading it from there, and using it that way with these commands instead:
"install": {
"commands": [
"aws s3 cp s3://the-bucket-name/id_rsa ~/.ssh/id_rsa",
"chmod 600 ~/.ssh/id_rsa",
"yarn install"
]
},
This works fine.
So I guess the question is... Has anyone tried this and had better success? Is there something that I am not doing correctly that you can think of?
I have encountered the same issue.
Copying to S3 the id_rsa generated by the command echo $LOCAL_RSA_VAR > ~/.ssh/id_rsa, I noticed that the newlines had not been preserved.
I resolved it by putting the env var between double quotes:
echo "$LOCAL_RSA_VAR" > ~/.ssh/id_rsa
I was able to get an answer by diffing the output of the env var vs. the file contents from S3 ('cat' will not print out the content of a Secrets Manager env variable). It turns out the content of the env var was altered by the 'echo' command.
The solution that ended up working for me was:
printenv LOCAL_RSA_VAR > ~/.ssh/id_rsa
This command didn't alter the content of the RSA key, and I was able to use it successfully.
As a recap, this is what worked for me:
Generate the new key
Used the command pbcopy < id_rsa to get the local key onto the clipboard
Pasted that into a new secret in Secrets Manager
Used the first snippet above to have the buildspec retrieve the content into an env variable, and then the printenv command above, in the install portion of the buildspec, to save it to the default SSH location.
Hope this helps anyone that runs into the same issue.
UPDATE: I found that this works if the RSA key is stored as its own secret, as one big block of text. If you try to add it as part of a JSON object, i.e.:
{
"some": "thing",
"rsa_id": "<the rsa key here>"
}
this does not seem to work. I found that the content is altered, with spaces in place of the newlines. This is what I found when running od -ax on each and comparing them:
own secret:
R I V A T E sp K E Y - - - - - nl
json secret:
R I V A T E sp K E Y - - - - - sp
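To reproduce that comparison inside the build itself, one option (a sketch, assuming bash and that the S3 copy of the key sits at a placeholder path) is:
# Compare the env var's bytes against the known-good file; od output also avoids
# the problem mentioned above that the secret's plain value is not printed by cat.
diff <(printenv LOCAL_RSA_VAR | od -ax) <(od -ax /tmp/id_rsa_from_s3)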
I had the same issue; I fixed it by NOT copy-pasting my private key into Secrets Manager, but using the AWS CLI to upload it instead:
aws secretsmanager put-secret-value --secret-id AWS_CODECOMMIT_SSH_PRIVATE --secret-string file://myprivatekey.pem
And then CodeBuild worked fine:
version: 0.2
env:
  secrets-manager:
    AWS_CODECOMMIT_SSH_ID: AWS_CODECOMMIT_SSH_ID
    AWS_CODECOMMIT_SSH_PRIVATE: AWS_CODECOMMIT_SSH_PRIVATE
phases:
  install:
    commands:
      - echo "Setup CodeCommit SSH Key"
      - mkdir ~/.ssh/
      - echo "$AWS_CODECOMMIT_SSH_PRIVATE" > ~/.ssh/id_rsa
      - echo "Host git-codecommit.*.amazonaws.com" > ~/.ssh/config
      - echo " User $AWS_CODECOMMIT_SSH_ID" >> ~/.ssh/config
      - echo " IdentityFile ~/.ssh/id_rsa" >> ~/.ssh/config
      - echo " StrictHostKeyChecking no" >> ~/.ssh/config
      - chmod 600 ~/.ssh/id_rsa
      - chmod 600 ~/.ssh/config
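If in doubt whether a stored secret still contains real newlines, a quick check (assuming the AWS CLI is available where you run it) is to pull the value back and look at its line structure:
# Fetch the raw secret string and inspect its bytes; the secret id matches the one above.
aws secretsmanager get-secret-value --secret-id AWS_CODECOMMIT_SSH_PRIVATE --query SecretString --output text | od -c | head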

How to fix "public key for centos-release-7-9.2009.1.el7.centos.x86_64.rpm is not installed" in dockerfile?

I am trying out the centos 7 official base image with the following dockerfile:
FROM centos:7
RUN yum -y update && yum clean all
When building the image I get a warning about a missing public key:
warning: /var/cache/yum/x86_64/7/updates/packages/centos-release-7-9.2009.1.el7.centos.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for centos-release-7-9.2009.1.el7.centos.x86_64.rpm is not installed
--------------------------------------------------------------------------------
Total 26 MB/s | 40 MB 00:01
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
Userid : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
Package : centos-release-7-9.2009.0.el7.centos.x86_64 (@CentOS)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
How can I get rid of that warning?
As you can see below the warning, the key is a file that is part of the base image. You just need to import it yourself before yum imports it and throws that warning.
The following Dockerfile will not throw a warning during build:
FROM centos:7
RUN rpmkeys --import file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 && \
yum -y update && \
yum clean all
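If you want to double-check that the key really was imported (purely optional), imported keys show up as gpg-pubkey pseudo-packages in the finished image; <image> is a placeholder for your built tag:
# List the RPM GPG keys known to the image's rpm database.
docker run --rm <image> rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'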

fatal: could not read Username for 'https://github.com': No such device or address - rubyonrails - aws

I have a Ruby on Rails website that works in such a way that when a user signs up with their username, it creates a repo under that username in my GitHub account. It works flawlessly on Heroku. When I switched to Amazon Web Services I initially got:
intializing git
sh: git: command not found
sh: line 0: cd: /home/webapp: No such file or directory
I overcame this error by adding a config file in .ebextensions like:
commands:
  01_mkdir_webapp_dir:
    # use the test directive to create the directory
    # if the mkdir command fails the rest of this directive is ignored
    test: 'mkdir /home/webapp'
    command: 'ls -la /home/webapp'
  02_chown_webapp_dir:
    command: 'chown webapp:webapp /home/webapp'
  03_chmod_webapp_dir:
    command: 'chmod 700 /home/webapp'
packages:
  yum:
    git: []
Then I got a new error log like:
fatal: could not read Username for 'https://github.com': No such device or address
As a side note, when I run this script locally and sign up on the site at localhost:3000, the terminal prompts me for my GitHub username and password. Is that normal? Is this the cause of the error fatal: could not read Username for 'https://github.com': No such device or address?
But this code works flawlessly on Heroku.
Full log is below.
https://drive.google.com/file/d/1yPYsS1ETHhrEoYFWHJxt4y52jHYRkooj/view?usp=sharing
I have these environment variables set in AWS:
https://photos.app.goo.gl/GeQHXdUWUMuixTgNA
Try creating the git config file directly:
#/home/webapp/.gitconfig
[user]
  name = soumjoyel
  email = soumjoyel@gmail.com
using this script
files:
  "/home/webapp/.gitconfig":
    mode: "000644"
    owner: webapp
    group: webapp
    content: |
      [user]
        name = soumjoyel
        email = soumjoyel@gmail.com
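For what it's worth, the error itself usually means git tried to prompt for HTTPS credentials and had no terminal to ask on. If the app really does push to github.com over HTTPS, one non-interactive option (only a sketch; <token> stands for a GitHub personal access token, not something from the question) is to rewrite the URL so the token is supplied automatically:
# Run as the webapp user (or with HOME=/home/webapp) so git picks up the config.
git config --global url."https://<token>@github.com/".insteadOf "https://github.com/"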

travis encrypt-file for maven deploy

On my computer:
travis login --org
Username: xxxxxx
Password: xxxxxx
Successfully logged in as xxxxxx!
travis encrypt-file codesigning.asc -r XXXXXX/XXXXXX
encrypting codesigning.asc for XXXXXX/XXXXXX
storing result as codesigning.asc.enc
storing secure env variables for decryption
Please add the following to your build script (before_install stage in your .travis.yml, for instance):
openssl aes-256-cbc -K $encrypted_abcd1234_key -iv $encrypted_abcd1234_iv -in codesigning.asc.enc -out codesigning.asc -d
Pro Tip: You can add it automatically by running with --add.
Make sure to add codesigning.asc.enc to the git repository.
Make sure not to add codesigning.asc to the git repository.
Commit all changes to your .travis.yml.
On my Travis account:
On my GitHub repository:
I put the codesigning.asc.enc file in the test folder as test/codesigning.asc.enc.
I add this shell script:
if [ "$TRAVIS_BRANCH" = 'master' ] && [ "$TRAVIS_PULL_REQUEST" == 'false' ]; then
echo "******** Starting gpg"
openssl aes-256-cbc -K "$encrypted_abcd1234_key" -iv "$encrypted_abcd1234_iv" -in test/codesigning.asc.enc -out test/codesigning.asc -d
gpg --fast-import test/codesigning.asc
fi
I have this error on my travis console:
bad decrypt
139864985556640:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:evp_enc.c:539:
gpg: invalid radix64 character FE skipped
gpg: invalid radix64 character C4 skipped
gpg: read_block: read error: invalid packet
gpg: import from `test/codesigning.asc' failed: invalid keyring
gpg: Total number processed: 0
OpenPGP (the cryptographic protocol implemented by gpg) and X.509 (the cryptographic protocol used by OpenSSL) are not compatible. You cannot import this key into GnuPG (you could import it into gpgsm, which implements X.509, but that is not the normal gpg you want to use). You will have to stick with OpenSSL or GnuTLS to handle the key and encrypted messages for it.
