Prevent GPG key sharing in VSCode Remote Container - docker

The following paragraph in the official docs describes how to enable GPG key sharing (from localhost to Remote Container) in VSCode (https://code.visualstudio.com/docs/remote/containers#_sharing-gpg-keys).
The instructions (for Linux) simply state that to share GPG keys, you install gnupg2 locally and in the container. But what if I have gnupg2 installed and don't want the keys shared? From what I can tell, VSCode executes post-startup commands within the container that perform the key sharing, e.g.:
Copy /home/karlschriek/.gnupg/pubring.kbx to /home/vscode/.gnupg/pubring.kbx
Copy /home/karlschriek/.gnupg/trustdb.gpg to /home/vscode/.gnupg/trustdb.gpg
...
I have not been able to find a setting that prevents this. The container is also, presumably, using the same gpg-agent as the local host, which I would likewise like to prevent.

Since this behavior does not seem to be configurable, I would:
- move those files into a custom folder (outside ~/.gnupg) and reference that folder with the GNUPGHOME environment variable, and
- write a VSCode starter script that launches VSCode after a local export GNUPGHOME="".
That way, VSCode would look for gnupg files to share in the default ~/.gnupg folder, which is no longer used in your case.
It is a simple workaround, not an exact solution, but one simple enough to test; a sketch is shown below.
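A minimal sketch of that workaround (the ~/.gnupg-real folder name and launching VSCode with code . are assumptions; adapt to your setup):

# one time: move the existing keyrings out of the default location
mkdir -p ~/.gnupg-real
mv ~/.gnupg/* ~/.gnupg-real/

# for day-to-day use, point gpg at the relocated keyrings
export GNUPGHOME="$HOME/.gnupg-real"

# starter script: launch VSCode with an empty GNUPGHOME so the Remote Containers
# extension only finds the (now empty) default ~/.gnupg and has nothing to share
GNUPGHOME="" code .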

Just to add another detail that might help someone: note that you have to install gnupg both locally and in the container for the sharing to happen. I was running into issues with a gnupg command failing during startup and was able to solve it by removing gnupg from my Dockerfile (it had been installed automatically).

Related

Why are dependencies installed via an ENTRYPOINT and tini?

I have a question regarding an implementation of a Dockerfile on dask-docker.
FROM continuumio/miniconda3:4.8.2
RUN conda install --yes \
    -c conda-forge \
    python==3.8 \
    [...] \
    && rm -rf /opt/conda/pkgs
COPY prepare.sh /usr/bin/prepare.sh
RUN mkdir /opt/app
ENTRYPOINT ["tini", "-g", "--", "/usr/bin/prepare.sh"]
prepare.sh is just facilitating installation of additional packages via conda, pip and apt.
There are two things I don't get about that:
Why not just place those instructions in the Dockerfile? Possibly indirectly (modularized) by COPYing dedicated files (requirements.txt, environment.yaml, ...)
Why execute this via tini? At the end it does exec "$@", where one can start a scheduler or worker - that's more what I associate with tini.
This way, every time you run a container from the built image, you have to repeat the installation process!?
Maybe I'm overthinking it but it seems rather unusual - but maybe that's a Dockerfile pattern with good reasons for it.
optional bonus questions for Dask insiders:
Why copy prepare.sh to /usr/bin (instead of, e.g., /tmp)?
What purpose does the created directory /opt/app serve?
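For context, a wrapper entrypoint of this kind typically installs optional extras requested at run time and then hands control to the real command. A hypothetical sketch (not the actual dask prepare.sh; the environment variable names are assumptions):

#!/usr/bin/env bash
set -e

# install extra packages requested at container start via environment variables
if [ -n "$EXTRA_CONDA_PACKAGES" ]; then
    conda install -y $EXTRA_CONDA_PACKAGES
fi
if [ -n "$EXTRA_PIP_PACKAGES" ]; then
    pip install $EXTRA_PIP_PACKAGES
fi

# hand over to whatever command was passed to the container
# (e.g. a scheduler or worker), running as a child of tini
exec "$@"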
It really depends on the nature and usage of the files being installed by the entry point script. In general, I like to break this down into a few categories:
1. Local files that are subject to frequent changes on the host system, and will be rolled into the final image for production release. This is for things like the source code for an application that is under development and needs to be tested in the container. You want these to be copied into the runtime every time the image is rebuilt. Use a COPY in the Dockerfile.
2. Files from other places that change frequently and/or are specific to the deployment environment. This is stuff like secrets from a Hashicorp vault, network settings, server configurations, etc.... that will probably be downloaded into the container all the time, even when it goes into production. The entry point script should download these, and it should decide which files to get and from where based on environment variables that are injected by the host.
3. Libraries, executable programs (under /bin, /usr/local/bin, etc...), and things that specifically should not change except during a planned upgrade. Usually anything that is installed using pip, Maven or some other program that does dependency management, and anything installed with apt-get or equivalent. These files should not be installed from the Dockerfile or from the entrypoint script. Much, much better is to build your base image with all of the dependencies already installed, and then use that image as the FROM source for further development. This has a number of advantages: it ensures a stable, centrally located starting platform that everyone can use for development and testing (it forces uniformity where it counts); it prevents you from hammering on the servers that host those libraries (constantly re-downloading all of those libraries from pypi.org is really bad form... someone has to pay for that bandwidth); it makes the build faster; and if you have a separate security team, this might help reduce the number of files they need to scan.
You are probably looking at #3, but I'm including all three since I think it's a helpful way to categorize things.
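As a sketch of the base-image approach from category 3 (the image names and package list are made up for illustration):

# Dockerfile.base - built and published once, with the heavy dependencies baked in
# (build as: docker build -t myorg/conda-base:1.0 -f Dockerfile.base .)
FROM continuumio/miniconda3:4.8.2
COPY environment.yaml /tmp/environment.yaml
RUN conda env update -n base -f /tmp/environment.yaml && conda clean -afy

# Dockerfile - the application image; rebuilds only copy code, never re-download deps
FROM myorg/conda-base:1.0
COPY . /opt/app
WORKDIR /opt/app
CMD ["python", "main.py"]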

Copying an exe and composing it as a docker image and making it platform independent

I need to create a Docker image which, when run, should install an exe into the directory specified in my Dockerfile.
Basically, I need the ImageMagick application. The Dockerfile should be platform independent: if I run it on Windows it should use the Windows distribution, and on Linux the Linux distribution. It would be great if it also added an environment variable to the system. I browsed for a solution, but couldn't find an appropriate one.
I know it's a bit late but maybe someone (like me) was still searching.
I ended up using a java-imagemagick docker version from https://hub.docker.com/r/cpaitsupport/java-imagemagick/dockerfile
You can run docker pull cpaitsupport/java-imagemagick to get this docker image to your docker machine.
Now comes the tricky part: I needed to run ImageMagick inside a Docker container for my main app. You can COPY the files from cpaitsupport/java-imagemagick into your custom container. Example:
COPY --from=cpaitsupport/java-imagemagick:latest . ./some/dir/imagemagick
Now you should have the file structure of your custom app, and under some/dir/imagemagick/ the file structure of ImageMagick. Here live all the ImageMagick-related files (convert, magick, the libraries, etc.).
Now, if you want to use ImageMagick in your code, you need to set up some ENV variables in your Docker container with the "new" paths to the ImageMagick directory. Example:
ENV IM4JAVA_TOOLPATH=/some/dir/imagemagick/usr/bin \
    LD_LIBRARY_PATH=/usr/lib:/some/dir/imagemagick/usr/lib \
    MAGICK_CONFIGURE_PATH=/some/dir/imagemagick/etc/ImageMagick-7 \
    MAGICK_CODER_MODULE_PATH=/some/dir/imagemagick/usr/lib/ImageMagick-7.0.5/modules-Q16HDRI/coders \
    MAGICK_HOME=/some/dir/imagemagick/usr
Now, in your Java code, delete the call ProcessStarter.setGlobalSearchPath(imPath); if it is set, so that IM4JAVA_TOOLPATH is used instead.
Now ConvertCmd cmd = new ConvertCmd(); and cmd.run(op); should work.
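Putting the Docker side together, a sketch of the relevant Dockerfile lines might look like this (the base image is just an assumption; the /some/dir/imagemagick path is the one used above):

FROM openjdk:11-jre-slim
# pull the ImageMagick file tree out of the prebuilt image
COPY --from=cpaitsupport/java-imagemagick:latest . /some/dir/imagemagick
# point im4java and ImageMagick at the relocated binaries, libraries and configs
ENV IM4JAVA_TOOLPATH=/some/dir/imagemagick/usr/bin \
    LD_LIBRARY_PATH=/usr/lib:/some/dir/imagemagick/usr/lib \
    MAGICK_CONFIGURE_PATH=/some/dir/imagemagick/etc/ImageMagick-7 \
    MAGICK_CODER_MODULE_PATH=/some/dir/imagemagick/usr/lib/ImageMagick-7.0.5/modules-Q16HDRI/coders \
    MAGICK_HOME=/some/dir/imagemagick/usr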
Maybe it's not the best way, but it worked for me and I was struggling a lot.
Hope this helps!
Feel free to correct or add additional info.
You can install (extract) files onto the external host system using Docker bind mounts or volumes; however, you cannot change host system settings, such as its environment variables, from inside a container.
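For example, a bind mount can be used to copy files from a container onto the host (the image name and paths are placeholders, and the image is assumed to contain cp):

# copies /usr/bin from inside the container to /opt/extracted on the host
docker run --rm -v /opt/extracted:/out some-imagemagick-image cp -r /usr/bin /out/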

Move file downloaded in Dockerfile to harddrive

First off, I really lack a lot of knowledge regarding Docker itself and its structure. I know that it'd be way more beneficial to learn the basics first, but I do require this to work in order to move on to other things for now.
So within a Dockerfile I installed wget and used it to download a file from a website; authentication and download are successful. However, when I later try to move said file it can't be found, and it doesn't show up in e.g. Explorer either (the path was specified).
I thought it might have something to do with RUN and how it executes the wget command; I read that the container ID can be used to copy files to the hard drive, but how would I do that within a Dockerfile?
RUN wget -P ./path/to/somewhere http://something.com/file.txt --username xyz --password bluh
ADD ./path/to/somewhere/file.txt /mainDirectory
The download is shown and the log-in is successful, but as I mentioned I am having trouble using that file later on, as it cannot be located on the hard drive. Probably a basic error, but I'd really appreciate some input that might lead to a solution.
Obviously the error is produced when trying to execute ADD, as there is no file to move. I am trying to find a way to mount a volume in order to store it, but so far in vain.
Edit:
Though the question is similar to the "move to hard drive" one, I am searching for ways to get the ID of the container created within the Dockerfile in order to move the file; while that thread provides such answers, I haven't had any luck using them within the Dockerfile itself.
Short answer is that it's not possible.
The Dockerfile builds an image, which you can run as a short-lived container. During the build, you don't have (write) access to the host and its file system. Which kinda makes sense, since you want to build an immutable image from which to run ephemeral containers.
What you can do is run a container and mount a path from your host as a volume into the container. This is the only way to share files between the host and a container.
Here is an example how you could do this with the sherylynn/wget image:
docker run -v /path/on/host:/path/in/container sherylynn/wget wget -O /path/in/container/file http://my.url.com
The -v HOST:CONTAINER parameter allows you to specify a path on the host that is mounted inside the container at a specified location.
For wget, I would prefer -O over -P when downloading a single file, since it makes it really explicit where your download ends up. When you point -O to the location of the volume, the downloaded file ends up on the host system (in the folder you mounted).
Since I have no idea what your image or your environment looks like, you might need to tweak one or two things to work well with your own image. As a general recommendation: For basic commands like wget or curl, you can find pre-made images on Docker Hub. This can be quite useful when you need to set up a Continuous Integration pipeline or so, where you want to use wget or curl but can't execute it directly.
Use wget -O instead of -P to download to a specific file,
for example:
RUN wget -O /tmp/new_file.txt http://something.com/new_file.txt --username xyz --password bluh
Thanks

docker install with tcp enabled 0.0.0.0

Wondering if anyone knows how to install Docker with TCP enabled? Something like below?
yum install docker --tcp-enabled --host 0.0.0.0
I understand I can go and manually change OPTIONS in /etc/sysconfig/docker.
I am trying to provision a server with a fresh Docker install through scripts, and I do not want to log onto the box and make these changes every time a new version comes out. I also understand I can just use a script with sed/awk to do this, but I am wondering if there is an easier way, without having to maintain a script.
My preferred solution is to use /etc/docker/daemon.json. This will let you add options to just about any install.
Note that I don't believe this will unset options that were defined on the command line; it's designed to let you use both. Those command-line options are defined by your startup script, which from your description is systemd on a RedHat/CentOS environment with environment variables injected from /etc/sysconfig/docker (you won't see this on other platforms like Debian). So if you need to remove an option, you'll still need to update your /etc/sysconfig/docker.
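As a sketch, a provisioning script could drop the TCP listener into daemon.json like this (the port, keeping the unix socket, and restarting via systemctl are assumptions; note that the hosts key conflicts with a -H flag already passed in the systemd unit):

cat > /etc/docker/daemon.json <<'EOF'
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
EOF
systemctl restart docker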

Setting up OpenProject on OpenShift

I'm trying to install OpenProject on OpenShift, but I'm having difficulties understanding the process. I've managed to create an OpenShift application and SSH into the domain; however, I don't have permission to download the zip file / create the folder as in the instructions.
I have to mention that my GIT/Ruby/Openshift knowledge is very limited.
Has anyone tried this before? Can you tell me if it's possible and how?
Thanks!
You'll want to ssh into your gear:
$ rhc ssh -a <app_name>
Then cd into the data dir
$ cd $OPENSHIFT_DATA_DIR
Wget your file
$ wget https://github.com/opf/openproject/archive/2.4.0.zip
Despite this issue being so old, I would like to share my efforts. I also wanted to get OpenProject running on OpenShift. My way was to first get an initial pod running with root permissions to set things up via the web UI, and then run the individual services without root permissions.
Details can be found in this Github repository:
https://github.com/jngrb/openproject-openshift
