Add SSL certificate to store in Docker

I am trying to create a simple Docker image that runs .NET Core APIs. The problem is that my environment is behind a proxy with a self-signed certificate, i.e. not trusted :(
Following is my Dockerfile:
## runtime:3.1 does not support certoc or openssl or powershell which forced me to change image to nanoserver-1809
#FROM mcr.microsoft.com/dotnet/core/runtime:3.1
FROM mcr.microsoft.com/dotnet/core/runtime:3.1-nanoserver-1809
ARG source
ARG BUILD_ENV=development
# Option - 1
# ADD z-scaler-certificate.crt /usr/local/share/ca-certificates/z-scaler-certificate.crt
# RUN certoc -addstore root /usr/local/share/ca-certificates/z-scaler-certificate.crt
# Option - 2
# RUN powershell IMPORT-CERTIFICATE -FilePath /usr/z-scaler-certificate.crt -CertStoreLocation 'Cert:\\LocalMachine\Root'
# Option - 3
# RUN CERT_DIR=$(openssl version -d | cut -f2 -d \")/certs; cp /usr/z-scaler-certificate.crt $CERT_DIR; update-ca-certificates
# Option - 4
ADD z-scaler-certificate.crt /container/cert/path
RUN update-ca-certificates
WORKDIR /app
COPY ${source:-bin/Debug/netcoreapp3.1} .
ENTRYPOINT ["dotnet", "Webjob.dll"]
I tried almost every option I could find on the internet, but they all fail with the same error -
executor failed running [cmd /S /C update-ca-certificates]: unable to find user ContainerUser: invalid argument
I need help figuring out what I am doing wrong that the certificate is not being added to the store.

In order to execute admin tasks you should use the ContainerAdministrator user:
FROM mcr.microsoft.com/dotnet/core/runtime:3.1-nanoserver-1809
ARG source
ARG BUILD_ENV=development
USER ContainerAdministrator
...
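A minimal sketch of the full file, combining this with the certoc approach from Option 1 (assuming certoc is available in the nanoserver-1809 image, as the comment in the question's Dockerfile suggests; the /certs path is my own choice):
FROM mcr.microsoft.com/dotnet/core/runtime:3.1-nanoserver-1809
ARG source
# switch to the admin user so the machine certificate store can be modified
USER ContainerAdministrator
ADD z-scaler-certificate.crt /certs/z-scaler-certificate.crt
RUN certoc -addstore root /certs/z-scaler-certificate.crt
# drop back to the unprivileged default user for runtime
USER ContainerUser
WORKDIR /app
COPY ${source:-bin/Debug/netcoreapp3.1} .
ENTRYPOINT ["dotnet", "Webjob.dll"]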

When working with containers, I'd recommend keeping to standard Linux tech unless there is a good reason not to. This is the most standard option and will work on the MS Debian images:
COPY z-scaler-certificate.crt /usr/local/share/ca-certificates/z-scaler-certificate.crt
RUN update-ca-certificates
I am assuming here that your CRT file is a valid root certificate.
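For reference, a full sketch against the Debian-based runtime image (this assumes the plain runtime:3.1 tag from the question's commented-out FROM line, and that the ca-certificates package is present in it):
FROM mcr.microsoft.com/dotnet/core/runtime:3.1
# update-ca-certificates picks up any .crt placed under /usr/local/share/ca-certificates/
COPY z-scaler-certificate.crt /usr/local/share/ca-certificates/z-scaler-certificate.crt
RUN update-ca-certificates
WORKDIR /app
COPY bin/Debug/netcoreapp3.1 .
ENTRYPOINT ["dotnet", "Webjob.dll"]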

Related

How can I use a several-line command in a Dockerfile in order to create a file within the resulting image

I'm following installation instructions for RedhawkSDR, which rely on having a Centos7 OS. Since my machine uses Ubuntu 22.04, I'm creating a Docker container to run Centos7 then installing RedhawkSDR in that.
One of the RedhawkSDR installation instructions is to create a file with the following command:
cat<<EOF|sed 's#LDIR#'`pwd`'#g'|sudo tee /etc/yum.repos.d/redhawk.repo
[redhawk]
name=REDHAWK Repository
baseurl=file://LDIR/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhawk
EOF
How do I get a Dockerfile to execute this command when creating an image?
(Also, although I can see that this command creates the file /etc/yum.repos.d/redhawk.repo, which consists of the lines from [redhawk] to gpgkey=...., I have no idea how to parse this command or understand exactly why it does that...)
Using the text editor of your choice, create the file on your local system. Remove the word sudo from it; give it an additional first line #!/bin/sh. Make it executable using chmod +x create-redhawk-repo.
Now it is an ordinary shell script, and in your Dockerfile you can just RUN it.
COPY create-redhawk-repo ./
RUN ./create-redhawk-repo
But! If you look at what the script actually does, it just writes a file into /etc/yum.repos.d with a LDIR placeholder replaced with some other directory. The filesystem layout inside a Docker image is fixed, and there's no particular reason to use environment variables or build arguments to hold filesystem paths most of the time. You could use a fixed path in the file
[redhawk]
name=REDHAWK Repository
baseurl=file:///redhawk-yum/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhawk
and in your Dockerfile, just COPY that file in as-is, and make sure the downloaded package archive is in that directory. Adapting the installation instructions:
ARG redhawk_version=3.0.1
RUN wget https://github.com/RedhawkSDR/redhawk/releases/download/$redhawk_version/\
redhawk-yum-$redhawk_version-el7-x86_64.tar.gz \
&& tar xzf redhawk-yum-$redhawk_version-el7-x86_64.tar.gz \
&& rm redhawk-yum-$redhawk_version-el7-x86_64.tar.gz \
&& mv redhawk-yum-$redhawk_version-el7-x86_64 redhawk-yum \
&& rpm -i redhawk-yum/redhawk-release*.rpm
COPY redhawk.repo /etc/yum.repos.d/
Remember that, in a Dockerfile, you are root unless you've switched to another USER (and in that case you can use USER root to switch back); you generally do not need sudo in Docker at all, and can just delete sudo where it appears in these instructions.
How do I get a Dockerfile to execute this command when creating an image?
Just use printf and run this command as a single line:
FROM image_name:image_tag
ARG LDIR="/default/folder/if/argument/not/set"
# if container has sudo command and default user is not root
# you should choose this variant
RUN printf '[redhawk]\nname=REDHAWK Repository\nbaseurl=file://%s/\nenabled=1\ngpgcheck=1\ngpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhawk\n' "$LDIR" | sudo tee /etc/yum.repos.d/redhawk.repo
# if default container user is root this command without piping may be used
RUN printf '[redhawk]\nname=REDHAWK Repository\nbaseurl=file://%s/\nenabled=1\ngpgcheck=1\ngpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhawk\n' "$LDIR" > /etc/yum.repos.d/redhawk.repo
Where LDIR is a build argument, and the docker build should be run like:
docker build ./ --build-arg LDIR=`pwd`
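For completeness: with BuildKit (Dockerfile syntax 1.4 and later) a heredoc can be written directly in the Dockerfile, avoiding the printf escapes entirely. A sketch, using the fixed path from the first answer:
# syntax=docker/dockerfile:1
FROM centos:7
COPY <<EOF /etc/yum.repos.d/redhawk.repo
[redhawk]
name=REDHAWK Repository
baseurl=file:///redhawk-yum/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhawk
EOF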

Building image with Dockerfile is not using existing layers?

I'm quite new to Dockerfiles and not sure what's going on here. I'm using Podman and building a new image with a Dockerfile to set some additional permissions, using a base image from Docker Hub. According to the layers (#8) in this base image, it seems that the variable SDKMAN_DIR is being set to /home/javaUser/.sdkman.
/bin/sh -c export SDKMAN_DIR="/home/javaUser/.sdkman" && curl -s "https://get.sdkman.io" | bash
However, when I build a new image based on the newly created image with the Dockerfile, start a new container, exec into the container and execute the command echo $SDKMAN_DIR, the result is empty. I was expecting that my variant of the image would have the same variables set.
Can anyone tell me what I'm missing here?
bash-5.1$ echo $SDKMAN_DIR
Below you'll find the Dockerfile I'm using.
FROM docker.io/itext/dito-sdk:2.4.5
# Make available to rootless users
USER root
RUN chmod g+x /home/javaUser/.sdkman/bin/sdkman-init.sh
# Become rootless user
USER 2000
EXPOSE 8080
ENTRYPOINT /bin/bash -c "source /home/javaUser/.sdkman/bin/sdkman-init.sh && kotlin /opt/dito/startup-prepare.main.kts && java -jar /opt/dito/dito-sdk-docker.jar server /etc/opt/dito/config.yml"
Thanks in advance.
Edit
Somehow I got it to work by passing in an ENV. Still, I'm not exactly sure why the variable is not available without explicitly defining an ENV variable. Does this have to do with not being the root but the 2000 user after an exec into the container?
Edit
The variable is not available at runtime because the variable is declared in a RUN step, and is therefore limited to that step's scope. Thanks to @DavidMaze for clarifying.
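In other words, each RUN step runs in its own shell, and an export there dies with that shell; only ENV persists into the image metadata. A minimal sketch of the fix (the path is taken from the base image's layer #8 quoted above):
FROM docker.io/itext/dito-sdk:2.4.5
# the base image's `export SDKMAN_DIR=...` lived only inside its RUN step;
# ENV records the variable in the image config, so it is visible at runtime
# and in `docker exec` shells
ENV SDKMAN_DIR=/home/javaUser/.sdkman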

How do I add a root certificate and keep it only at docker build time?

The question has 2 parts. The 1st part, how to add a root certificate, is simple, and we have references like How do I add a CA root certificate inside a docker image?
The 2nd part, which is what I actually want to ask, is: how do I keep the root certificate only at docker build time?
Maybe we can use buildctl and RUN --mount=type=secret, but that cannot cover all cases.
Say I would like to reach sites with a self-signed certificate, like:
RUN curl https://x01.self-signed-site/obj01
RUN npm install --registry https://x02.self-signed-site/npm
RUN pip install -i https://x03.self-signed-site/pypi/simple
RUN mvn install
...
Thus, we need to configure the certificate for each tool:
(prepare the certificate and prepare .npmrc, .curlrc, ...)
(for curl, npm and pip we can use env vars, but we cannot guarantee that this works for other tools)
Therefore, we need to download the self-signed certificate into the image and also modify some files to apply the cert config. How do we keep those changes only at build time (no persistent layer in the final image)?
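For the tools that do accept a file path flag, BuildKit's secret mount (mentioned above) keeps the cert out of every layer: the file exists only during the RUN step it is mounted into. A sketch for the curl case (the secret id, paths, and base image are my own choices):
# syntax=docker/dockerfile:1
FROM debian:bullseye
RUN apt-get update && apt-get install -y curl
# the secret is available at /run/secrets/corp_ca only during this step
RUN --mount=type=secret,id=corp_ca \
    curl --cacert /run/secrets/corp_ca https://x01.self-signed-site/obj01 -o /obj01
Build it with: docker build --secret id=corp_ca,src=./corp-ca.crt .
As noted, this does not cover every tool, which motivates the approach below.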
We resolved this problem by using docker save and docker load; but currently, docker load does not work as we expect (see also: how to keep layers when doing `docker load`).
Anyway, below is our solution in pseudo-code:
docker save -o out.tar <image>
mkdir contents && cd contents
tar xf ../out.tar
open manifest.json, get config <hash>.json as config.json
remove target layers in:
- config.json[history]
- config.json[rootfs][diff_ids]
- manifest.json[0][Layers]
remove layer tarballs (get layer_hashes from manifest.json[0][Layers]):
- <layer_hash>/*
fill gap between missing layers:
- <layer_hash_next>/json[parent] = <layer_hash_prev>
tar cf ../new.tar *
docker rmi <image>
docker load -i ../new.tar
ref: https://github.com/stallpool/track-network-traffic/blob/main/bin/docker_image_cleanup.py
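An alternative that needs no post-processing of the saved image is a multi-stage build: install the cert and fetch everything in a builder stage, then COPY only the produced artifacts into a clean final stage, so no layer of the final image ever contains the cert or the tool configs. A sketch (base images, paths, and file names are illustrative):
FROM debian:bullseye AS builder
RUN apt-get update && apt-get install -y curl ca-certificates
COPY corp-ca.crt /usr/local/share/ca-certificates/corp-ca.crt
RUN update-ca-certificates
RUN mkdir /out && curl https://x01.self-signed-site/obj01 -o /out/obj01

FROM debian:bullseye
# only the fetched artifacts cross the stage boundary; the cert does not
COPY --from=builder /out/ /out/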

ibmcom/mq docker image backward compatibility issue

I was using the docker image ibmcom/mq.
The Dockerfile built from my compose file was:
FROM ibmcom/mq
USER root
# create another client user
# default is app without password
RUN useradd user1 -G mqclient && \
echo user1:passwd | chpasswd
Then it suddenly stopped working when I built against the latest image again.
The error is:
useradd: group 'mqclient' does not exist
ERROR: Service 'mq' failed to build: The command '/bin/sh -c useradd user1 -G mqclient && echo user1:passwd | chpasswd' returned a non-zero code: 6
Now the build is not working with the latest image version (9.1.5.0-r1) but works with an old version, e.g. 9.1.4.0-r1.
Can anyone suggest an alternative?
From 9.1.5, the container does not use OS-based users or groups. This is to conform to cloud best practices. Instead a file-based system is used, so that when you roll out the container into production in a cloud you can switch to an LDAP-based system.
The 9.1.5 container uses htpasswd, with the relevant file in /etc/mqm/
For development, if you are not going to create new users, then you can use the 9.1.5 container. If you want to create new users, then you can use 9.1.4 or earlier, or use htpasswd with bcrypt to create the users.
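A sketch of the htpasswd route (the /etc/mqm/mq.htpasswd file name is an assumption based on the answer above; -B selects bcrypt and -b takes the password on the command line):
FROM ibmcom/mq
USER root
# assumes the htpasswd utility is available in the image (it ships with
# httpd-tools / apache2-utils on most distributions); add -c to create
# the file if it does not already exist
RUN htpasswd -B -b /etc/mqm/mq.htpasswd user1 passwd
Remember to switch back to the image's default non-root user afterwards if the base image defines one.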

Docker Logs Issue: Logs are not created or displayed in Tomcat's logs folder in docker container

We are using a Docker container and created a Dockerfile. Inside this container we deployed a war file using a tomcat image,
and we can see tomcat logs at the console, but the console log is not updating
after sending a request to tomcat via URL.
Also, we cannot see any log file inside tomcat's logs folder.
Can anyone help me out with how we can see tomcat logs like localhost.log/catalina.log/manager.log etc.?
My Dockerfile is:
FROM openjdk:6-jre
ENV CATALINA_HOME /usr/local/tomcat
ENV PATH $CATALINA_HOME/bin:$PATH
COPY tomcat $CATALINA_HOME
ADD newui.war $CATALINA_HOME/webapps
CMD $CATALINA_HOME/bin/startup.sh && tail -F $CATALINA_HOME/logs/catalina.out
EXPOSE 8080
I used the command below to build:
$ docker build -t tomcat .
and the command below to run tomcat:
$ docker run -p 8080:8080 tomcat
Here are a few things wrong with your dockerfile:
You mention that you need java 6, and yet the line FROM java as of this writing is set to use java:8.
You need to replace the FROM line with FROM java:6-jre or, as suggested by the official page, FROM openjdk:6-jre, if in 2018 you still need java 6 (which is dangerous). I would also strongly suggest using at least FROM tomcat:7, which should be able to run java 6 applets but will include some bug fixes, including support for longer Diffie-Hellman primes for HTTPS (if you are serious about your app's security).
Copt tomcat $CATALINA_HOME — you either mistyped the line when posting to SO, or your image should not build at all. It should be COPY tomcat $CATALINA_HOME.
Given that you are using the COPY command, there is no need to run mkdir -p beforehand, since COPY automatically creates all the required folders.
CMD $CATALINA_HOME/bin/startup.sh && tail -f $CATALINA_HOME/logs/catalina.out
First the tail -f part: since you are looking to tail a log file which might be created and recreated during the server's operation, instead of following the file descriptor you should follow the path by doing tail -F (capital F).
startup.sh && tail — tail will never start until startup.sh exits. A better approach is to put tail -F $CATALINA_HOME/logs/catalina.out & inside your startup.sh right before you start your tomcat server. That way tail will be running in the background.
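A sketch of such a wrapper script (paths reuse the question's CATALINA_HOME; substituting catalina.sh run for startup.sh is my own choice so the container's main process stays attached to Tomcat):
#!/bin/sh
# stream the log in the background; -F keeps following even if the file is recreated
tail -F "$CATALINA_HOME/logs/catalina.out" &
# run Tomcat in the foreground instead of the detaching startup.sh
exec "$CATALINA_HOME/bin/catalina.sh" run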
Regardless, this is a somewhat dangerous approach and you risk zombie processes, because bash does not manage its child processes and neither does docker. I would recommend using supervisord or something similar.
(From https://docs.docker.com/engine/admin/multi-service_container/)
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
Note: this Dockerfile sample omits a few of the best practices, e.g. removing the apt cache in the same RUN command as the apt-get update.
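For the Tomcat case, the referenced supervisord.conf might look like this sketch (the program name and command are my assumptions, reusing the question's CATALINA_HOME value):
[supervisord]
nodaemon=true

[program:tomcat]
command=/usr/local/tomcat/bin/catalina.sh run
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0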
A personal favorite is phusion/baseimage, but it is harder to set up since you'll need to install everything, including java, into the image.
If with all of these modifications you still have no luck in seeing the console update, then you'll need to also post the contents of your startup.sh file or other tomcat related configurations.
P.S.: it might be a good idea to do RUN mkdir -p $CATALINA_HOME/logs just to make sure that the logs folder exists for tomcat to write to.
P.P.S.: the java base image is actually using openjdk instead of the oracle one. Just thought I'd point it out.
You should check tomcat logging settings. The default logging.properties in the JRE specifies a ConsoleHandler that routes logging to System.err. The default conf/logging.properties in Apache Tomcat also adds several FileHandlers that write to files.
Example logging.properties file to be placed in $CATALINA_BASE/conf:
handlers = 1catalina.org.apache.juli.FileHandler, \
2localhost.org.apache.juli.FileHandler, \
3manager.org.apache.juli.FileHandler, \
java.util.logging.ConsoleHandler
.handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
############################################################
# Handler specific properties.
# Describes specific configuration info for Handlers.
############################################################
1catalina.org.apache.juli.FileHandler.level = FINE
1catalina.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
1catalina.org.apache.juli.FileHandler.prefix = catalina.
2localhost.org.apache.juli.FileHandler.level = FINE
2localhost.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
2localhost.org.apache.juli.FileHandler.prefix = localhost.
3manager.org.apache.juli.FileHandler.level = FINE
3manager.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
3manager.org.apache.juli.FileHandler.prefix = manager.
3manager.org.apache.juli.FileHandler.bufferSize = 16384
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = \
2localhost.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].handlers = \
3manager.org.apache.juli.FileHandler
# For example, set the org.apache.catalina.util.LifecycleBase logger to log
# each component that extends LifecycleBase changing state:
#org.apache.catalina.util.LifecycleBase.level = FINE
Example logging.properties for the servlet-examples web application to be placed in WEB-INF/classes inside the web application:
handlers = org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
############################################################
# Handler specific properties.
# Describes specific configuration info for Handlers.
############################################################
org.apache.juli.FileHandler.level = FINE
org.apache.juli.FileHandler.directory = ${catalina.base}/logs
org.apache.juli.FileHandler.prefix = servlet-examples.
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
More info at https://tomcat.apache.org/tomcat-6.0-doc/logging.html
We cannot see the logs in the Docker container unless we mount the logs directory.
To build the Dockerfile:
docker build -t tomcat .
To run the image:
docker run -p 8080:8080 tomcat
To copy tomcat's logs out of the container, run it with the logs folder mounted. The -v flag takes host_path:container_path:
docker run -d -p 8085:8085 -v /usr/local/tomcat/logs:/usr/local/tomcat/logs tomcat
or simply
docker run -d -v /usr/local/tomcat/logs:/usr/local/tomcat/logs tomcat
1st /usr/local/tomcat/logs: path on the host where we want the logs to land
2nd /usr/local/tomcat/logs: path of the tomcat/logs folder inside the docker container
tomcat: name of the image
Change the published port if it is busy.
Now the logs directory is mounted.
To get the list of containers, run docker ps -a.
Get the container id of the latest created container, then:
docker exec -it <mycontainer> bash
Then we can see the logs:
cd /usr/local/tomcat/logs
less <log name here>
To copy any folder from a docker container to the host:
docker cp <containerId>:/file/path/within/container /host/path/target
