add new GPG key detailed in Docker installation - docker

Completely new to Docker, I wondered what this step in the installation instructions means: https://docs.docker.com/engine/installation/linux/ubuntulinux/
4/ Add the new GPG key.
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

This is part of Secure APT (strong crypto to validate downloaded packages).
apt-key is the program used to manage the keyring of gpg keys for Secure APT.
gpg is the tool Secure APT uses to sign files and check their signatures.
That works... if the key server is up (see issue 13555, and "Key server times out while installing docker on Ubuntu 14.04").
The pool hkp://p80.pool.sks-keyservers.net is a subset of servers which are also reachable on port 80, making it friendlier to firewalls and corporate networks.
For some reason, most Docker documentation and tutorials point at that p80 pool for installation, without further explanation.
The thing is, this is a small pool of servers, and in practice they fail very often. (The fact that most tutorials send people to that small pool probably doesn't help either.)
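Given how flaky that pool is, one pragmatic workaround is to try several keyservers in turn until one answers. This is a hedged sketch, not something from the Docker docs; the fallback keyserver hostnames are assumptions and may change over time. The key ID is the one from the instructions above.

```shell
#!/bin/sh
# Try a list of keyservers until one serves the Docker key.
fetch_docker_key() {
    for server in \
        hkp://p80.pool.sks-keyservers.net:80 \
        hkp://keyserver.ubuntu.com:80; do
        if sudo apt-key adv --keyserver "$server" \
             --recv-keys 58118E89F3A912897C070ADBF76221572C52609D; then
            return 0
        fi
        echo "keyserver $server failed, trying the next one..." >&2
    done
    return 1
}
```

Call fetch_docker_key from your provisioning script; it returns non-zero only if every server in the list fails.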

Related

Prevent GPG key sharing in VSCode Remote Container

The following paragraph in the official docs describes how to enable GPG key sharing (from localhost to Remote Container) in VSCode (https://code.visualstudio.com/docs/remote/containers#_sharing-gpg-keys).
The instructions (for Linux) simply state that to share GPG keys, install gnupg2 locally and in the container. But what if I have gnupg2 installed but I don't want to have the keys shared? From what I can tell, VSCode execs post-startup commands within the container where the key sharing gets done, e.g.:
Copy /home/karlschriek/.gnupg/pubring.kbx to /home/vscode/.gnupg/pubring.kbx
Copy /home/karlschriek/.gnupg/trustdb.gpg to /home/vscode/.gnupg/trustdb.gpg
...
I have not been able to find a setting that will prevent this. It is also, presumably, using the same gpg-agent as the localhost. I would like to prevent this.
Since this behavior does not seem configurable, I would:
move those files into a custom folder (outside ~/.gnupg) and reference it with the GNUPGHOME environment variable;
write a VSCode starter script which launches VSCode after a local export GNUPGHOME="".
That way, VSCode would look for gnupg files to share in the default ~/.gnupg folder, which is no longer used in your case.
It is a simple workaround, not an exact solution, but one simple enough to test.
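The two steps above can be sketched as follows. The folder name and the final `code` invocation are illustrative assumptions, not documented VS Code behavior:

```shell
#!/bin/sh
# Keep the real keyring outside the default ~/.gnupg so the Remote
# Container extension finds nothing to share there.
set -e
REAL_GNUPGHOME="$HOME/.gnupg-private"
mkdir -p "$REAL_GNUPGHOME"
chmod 700 "$REAL_GNUPGHOME"
# Carry over any existing keyring files to the custom location
if [ -d "$HOME/.gnupg" ]; then
    cp -a "$HOME/.gnupg/." "$REAL_GNUPGHOME/"
fi
# Day-to-day gpg use goes through the custom folder...
export GNUPGHOME="$REAL_GNUPGHOME"
# ...while the starter script clears GNUPGHOME so VS Code inspects the
# default, now-unused ~/.gnupg instead (uncomment to actually launch):
# GNUPGHOME="" code .
```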
Just to add another detail which might help someone: notice that you have to install gnupg both locally and in the container. I was running into issues with a gnupg command failing during startup and was able to solve it by removing gnupg from my Dockerfile (it had been installed automatically).

How can I update my root certificates in an Ubuntu 14.04 Dockerfile? [closed]

Recently, my legacy Docker image stopped building because certain files refuse to download while building the image even though they download fine on my host system (and worked fine in the build before). This Dockerfile reproduces the problem:
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y ca-certificates
RUN update-ca-certificates
RUN apt-get update
RUN apt-get -y upgrade
#RUN apt-get install -y curl
#RUN curl -O https://gerrit.wikimedia.org/r/plugins/gitiles/mediawiki/extensions/SphinxSearch/+archive/refs/heads/REL1_24.tar.gz
RUN apt-get install -y wget
RUN wget https://gerrit.wikimedia.org/r/plugins/gitiles/mediawiki/extensions/SphinxSearch/+archive/refs/heads/REL1_24.tar.gz
Then, attempt to build the above Dockerfile with docker build .
When the wget approach (bottom) is used, I get the error:
ERROR: cannot verify gerrit.wikimedia.org's certificate, issued by '/C=US/O=Let\'s Encrypt/CN=R3':
Issued certificate has expired.
When I use the curl approach (top, commented out currently), I get the error:
curl: (60) SSL certificate problem: certificate has expired
I could bypass these issues by instructing wget and/or curl to ignore certificates, but I would prefer not to open that security hole if at all avoidable. The top section of the Dockerfile is me flailing around trying to make sure the system's CA certificates are all up to date, but apparently what I'm doing isn't effective.
There are a couple of ways I would do this without upgrading:
Before you try this, make sure your ca-certificates.conf is at /etc/ca-certificates.conf.
On my Ubuntu 16.04 system, ca-certificates.conf is in /etc/.
Add the next line BEFORE your "RUN update-ca-certificates" line.
For this to work, you MUST keep the "RUN update-ca-certificates" line.
RUN sed '/DST_Root_CA_X3.crt/d' /etc/ca-certificates.conf > /tmp/cacerts.conf && mv /tmp/cacerts.conf /etc/ca-certificates.conf
RUN update-ca-certificates
This will remove the DST_Root_CA_X3.crt that expired on Sep 30 14:01:15 2021 GMT, assuming the expired DST Root CA certificate is the cause of your issue.
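To see what that sed invocation actually does, here is a self-contained demo against a throwaway file. The contents are illustrative; a real /etc/ca-certificates.conf lists many more certificates:

```shell
#!/bin/sh
# Fabricated miniature of /etc/ca-certificates.conf
cat > /tmp/cacerts-demo.conf <<'EOF'
mozilla/DST_Root_CA_X3.crt
mozilla/ISRG_Root_X1.crt
mozilla/DigiCert_Global_Root_CA.crt
EOF
# Same filter as in the Dockerfile: drop the expired DST root line,
# keep everything else, and write the cleaned file out.
sed '/DST_Root_CA_X3.crt/d' /tmp/cacerts-demo.conf > /tmp/cacerts-demo.clean
cat /tmp/cacerts-demo.clean
```

Running update-ca-certificates afterwards rebuilds the bundle without the expired root.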
I would troubleshoot this manually as per this detailed guide: https://stackoverflow.com/a/69411107/1549092
If in your case step (1) doesn't work and that's not the issue, follow the guide in step (2) to identify the root cause; another root CA certificate could have expired at a different time.
NOTE: I can't be 100% sure what the root cause is in your case unless you share that ca-cert bundle so I can test it, or you can find it yourself by following step (2) above.
Meta: it's not clear this is a programming or development issue; you've already gotten close voters who think it isn't.
library/ubuntu:14.04 already contains ca-certificates 20170717~14.04.2 which is the last update issued for Trusty, so no, trying to update it doesn't help. That version DOES contain the ISRG root cert.
However, when accessing a host that uses the Let's Encrypt 'compatibility' chain from software based on OpenSSL, as both curl and wget on Ubuntu 14.04 are, having the ISRG root in the truststore is not enough: you also need either a recent version of the OpenSSL code (at least 1.1.x) OR the now-obsolete DST root removed.
You could download a near-current OpenSSL (3.0.0 was just released, and you don't want to mess with that) and build it yourself, then download curl and/or wget and build them against that new OpenSSL, but that's a good deal of work.
https://serverfault.com/questions/1079199/client-on-debian-9-erroneously-reports-expired-certificate-for-letsencrypt-issue has ways to remove DST root for Debian, which also applies to Ubuntu, and the sed method works for me in a docker build.
Alternatively, if you only need one file that doesn't change (and a REL tag with a number shouldn't), why not just download it on the host (where you apparently have modern code running) and copy it into the container (or mount it, if you care about the space)?
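That last suggestion, sketched as a Dockerfile fragment (an untested outline; the archive name comes from the question):

```dockerfile
# On the host, where the TLS stack is current, download once:
#   wget https://gerrit.wikimedia.org/r/plugins/gitiles/mediawiki/extensions/SphinxSearch/+archive/refs/heads/REL1_24.tar.gz
FROM ubuntu:14.04
# Copy the pre-downloaded archive instead of fetching it during the build
COPY REL1_24.tar.gz /tmp/REL1_24.tar.gz
```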

Docker EE installation: gpg: no valid OpenPGP data found

I tried to follow the instruction from Docker EE instruction.
https://docs.docker.com/ee/docker-ee/ubuntu/#set-up-the-repository
I hit a problem at step five: "Add Docker's official GPG key using your customer Docker Engine - Enterprise repository URL."
curl -fsSL "${DOCKER_EE_URL}/ubuntu/gpg" | sudo apt-key add -
When I type this command, terminal returned
curl: (22) The requested URL returned error: 403
gpg: no valid OpenPGP data found.
I tried opening the URL "${DOCKER_EE_URL}/ubuntu/gpg" in a browser; it also returns 403.
Then I thought maybe my local environment wasn't clean, so I rented a server from DigitalOcean, but it still returned the same message.
Could someone please point me in the right direction? Thank you!
Update: I tried CentOS; it doesn't work either.
I had the same issue this morning. For me it is now resolved.
It looks like it takes a couple of hours before the key becomes available (after requesting a trial license).

Jenkins Build rpm sign error

I am trying to copy my entire Jenkins configuration from RHEL 6.7 to RHEL 6.9. Everything looks good, except that one Jenkins build is failing with the error below:
Enter pass phrase:
can't connect to `/usr/share/tomcat6/.gnupg/S.gpg-agent': No such file or directory
gpg: skipped "Credit": Bad passphrase
gpg: signing failed: Bad passphrase
Pass phrase check failed
The gpg private key (gpg 1.4.5) exists in the Jenkins configuration. The strange thing is that all other builds are able to sign rpms; only this one build is failing.
Does anyone know how to fix it?
RPM reads the passphrase using getpass(3) and sends it to gnupg through an additional file descriptor.
This creates two problems that need to be handled by automated signing mechanisms:
1) Some versions of rpm use getpass(3) which will use a tty (to disable echoing) and will require setting up a pseudo tty so that the automated password can be passed to RPM. Make sure you have the pty file system mounted, and expect(1) is one way to setup the pty from which the password can be read. There's another approach using /proc file descriptors that can be attempted on linux. The password is then sent to gnupg using --passphrase-fd.
2) gnupg2 can also handle persistent passwords in a separate agent process which is sometimes tricky to setup and keep running "automatically" because the detection depends on both the user/process id's. Your report seems to have an agent (which means gnupg2 or special gpg1 configuration) even though you mention 1.4.5 (which would seem to use gnupg1).
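The --passphrase-fd plumbing described in (1) can be illustrated without rpm or gpg at all. The sketch below just shows a secret arriving on an extra file descriptor (fd 3) rather than a tty; in a real setup you would run something like gpg --batch --passphrase-fd 3 ... 3< passphrase-file (gnupg2 additionally wants loopback pinentry):

```shell
#!/bin/sh
# Open fd 3 on a here-document holding the passphrase
exec 3<<'EOF'
my-secret-passphrase
EOF
# The consumer reads the secret from fd 3, never from its terminal
read -r pass <&3
exec 3<&-   # close fd 3 again
echo "read ${#pass} characters from fd 3"
```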
I see two separate issues in your log that need to be addressed.
can't connect to `/usr/share/tomcat6/.gnupg/S.gpg-agent': No such file
or directory
gpg-agent needs to be running as a daemon on the build host, where it will connect to a socket to listen for requests. Perhaps it is already running, but Jenkins is looking for its socket in the wrong directory because GNUPGHOME is set to some unusual value. Or perhaps gpg-agent isn't running and a new instance needs to be started.
Something like this script can be used to safely attach to an existing gpg-agent or spin up a new instance.
#!/bin/bash
# Decide whether to start the gpg-agent daemon, then create the
# necessary symbolic link at $GNUPGHOME/.gnupg/S.gpg-agent
SOCKET=S.gpg-agent
if ! pgrep gpg-agent >/dev/null; then
    echo "Starting gpg-agent daemon."
    eval "$(gpg-agent --daemon)"
else
    echo "Daemon gpg-agent already running."
fi
# Nasty way to find gpg-agent's socket file...
GPG_SOCKET_FILE=$(find /tmp/gpg-* -name "$SOCKET")
echo "Updating socket file link."
cp -fs "$GPG_SOCKET_FILE" "$GNUPGHOME/.gnupg/S.gpg-agent"
You may want to substitute pidof for pgrep, depending on what your system provides.
If you do end up starting a new agent, you can check to see that your keys have been loaded into it by running gpg --list-keys. If you don't see it listed, you'll need to add it using gpg --import. Follow the Jenkins docs for Using Credentials.
Resolving the gpg-agent issue may resolve your other issue, so check to see if your job is working before doing anything else.
References:
www.linuxquestions.org
gpg: skipped "Credit": Bad passphrase
The GPG key is protected by a passphrase. rpm is asking for this passphrase and expects it to be manually entered. Of course, Jenkins is running things non-interactively, so that's not going to be possible. We need some way to supply the passphrase to rpm so it can forward it along to gpg, or else we need to supply the passphrase to gpg directly via some sort of caching mechanism.
The Expect Method
By wrapping our rpm --addsign call in an expect script, we can use expect to enter the passphrase headlessly. This practice is fairly common. Assuming the following script named rpm_sign.exp:
#!/usr/bin/expect -f
# argv[0] is the passphrase; everything after it is the file list
set password [lindex $argv 0]
set files [lrange $argv 1 end]
spawn rpm --addsign {*}$files
expect "Enter pass phrase:"
send -- "$password\r"
expect eof
This script can be used in a Jenkins shell step or pipeline as follows:
echo "Signing rpms ..."
sh "./rpm_sign.exp '${GPG_PASSPHRASE}' <list-of-files>"
Please note that, with some modifications, it is possible to specify which GPG identity you want to sign your RPMs with. This is done by passing --define {_gpg_name $YOUR_KEY_ID_HERE} as an argument to rpm inside the wrapper script. Note the TCL syntax. Since we're doing this on Jenkins, which may offer multiple sets of credentials, I assume this is relevant info.
References:
aaronhawley.livejournal.com
lists.fedoraproject.org
Other Methods
There are other solutions out there that may be more appropriate to your configuration. One such solution is to use RpmSignPlugin, which uses expect under the hood. Other solutions can be found in this posting on unix.stackexchange.com.

Using chef to set up apt repository

I am creating a recipe to install docker on Ubuntu 14.
How do I translate the following command to Chef?
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
So, using the apt_repository resource:
apt_repository "???" do
uri ???
distribution ???
components ???
keyserver "hkp://p80.pool.sks-keyservers.net:80"
key "58118E89F3A912897C070ADBF76221572C52609D"
end
In contrast to #kaboom, I would recommend the (more modern) apt cookbook maintained by Chef, which also allows setting up repos. The syntax is basically the same.
This is how I install Docker (on Debian):
apt_repository "docker" do
uri "https://apt.dockerproject.org/repo"
distribution "#{node['platform']}-#{node['lsb']['codename']}"
components ["main"]
key "https://apt.dockerproject.org/gpg"
end
EDIT: This resource is also available in Chef core, without any cookbook, as of 12.9.
EDIT2: Of course, you can also supply the keyserver and key_id parameters if you prefer to specify the key that way.
