Importing pfx certificate to selenoid firefox/edge image - docker

I need to import a client .pfx certificate inside a Selenoid container. For Chrome I use the following Dockerfile:
ARG IMAGE_VERSION
FROM selenoid/chrome:${IMAGE_VERSION}
ARG USER_ID
ARG USER_PASSWORD
COPY policy.json /etc/opt/chrome/policies/managed/policy.json
COPY ${USER_ID}.pfx .
RUN mkdir -p $HOME/.pki/nssdb \
&& certutil -N -d $HOME/.pki/nssdb --empty-password \
&& pk12util -d sql:$HOME/.pki/nssdb -i ${USER_ID}.pfx -W ${USER_PASSWORD}
However, the same approach doesn't work for the Microsoft Edge and Firefox images. Any ideas what I am doing wrong?
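One likely cause for Firefox: unlike Chromium-based browsers, which read the shared NSS database in $HOME/.pki/nssdb, Firefox keeps its own certificate store (cert9.db) inside its profile directory, so the pk12util import has to target the profile. A sketch, assuming the image keeps a default profile under $HOME/.mozilla/firefox (verify the real path inside your image, e.g. with find / -name cert9.db):

```dockerfile
ARG IMAGE_VERSION
FROM selenoid/firefox:${IMAGE_VERSION}
ARG USER_ID
ARG USER_PASSWORD
COPY ${USER_ID}.pfx .
# The profile glob below is an assumption -- check where your image
# actually keeps its default profile before relying on it.
RUN for profile in $HOME/.mozilla/firefox/*.default*; do \
        pk12util -d sql:"$profile" -i ${USER_ID}.pfx -W ${USER_PASSWORD}; \
    done
```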

Related

Cannot start keycloak in docker with letsencrypt certificates

I can run Keycloak with the following command
./bin/kc.sh start-dev \
--https-certificate-file=/etc/letsencrypt/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/etc/letsencrypt/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
It works as expected.
On the same computer, I try to run it using Docker:
docker run -p 80:8080 -p 443:8443 \
-v /etc/letsencrypt:/etc/letsencrypt:ro \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=change_me \
-e JAVA_OPTS_APPEND="$JAVA_OPTS_APPEND" \
quay.io/keycloak/keycloak:latest \
start-dev \
--https-certificate-file=/ect/letsencrypt/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/ect/letsencrypt/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
It fails
2022-12-23 23:11:59,784 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
2022-12-23 23:11:59,785 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: /ect/letsencrypt/live/keycloak.fhir-poc.hcs.us.com/cert.pem
2022-12-23 23:11:59,787 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) Key material not provided to setup HTTPS. Please configure your keys/certificates.
Any suggestions besides a reverse proxy?
The problem comes from Let's Encrypt's symlinked directory structure on Linux and the permissions needed to access those files.
Let's Encrypt links its files like this:
/etc/letsencrypt/live/<your-domain>/*.pem -> /etc/letsencrypt/archive/<your-domain>/*.pem
The problem is the link from the live folder to the archive folder: the permissions on the target files are usually not sufficient.
A quick fix is to create a cert mirror:
copy the related files from /etc/letsencrypt/live/<your-domain>/*.pem to a new cert folder such as /opt/certs
change the permissions in /opt/certs to 777: chmod -R 777 /opt/certs
create a cron.monthly job in /etc/cron.monthly that copies the files to /opt/certs and fixes the permissions, so your cert mirror is always up to date
This will make your example work. Please keep in mind that permissions like 777 let everyone access these files; you should use correct, restrictive permissions in a production environment.
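The monthly job might look like this (a sketch; the domain and the /opt/certs target are placeholders, and cp -L is used to dereference the live/ symlinks so real files land in the mirror):

```shell
#!/bin/sh
# /etc/cron.monthly/certs-mirror (hypothetical name): refresh the cert mirror.
DOMAIN="your-domain.example"   # placeholder -- your actual domain
SRC="/etc/letsencrypt/live/$DOMAIN"
DEST="/opt/certs"
mkdir -p "$DEST"
cp -L "$SRC"/*.pem "$DEST"/    # -L copies the symlink targets, not the links
chmod -R 777 "$DEST"           # see the warning above: don't do this in prod
```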
I discovered the answer: Let's Encrypt certificates in the "live" folder are symlinks into the "archive" folder, and I needed a custom Docker image for Keycloak to mount my certificates. So I followed the Keycloak docs for creating a custom Docker image and started a container from that image, using
https://www.keycloak.org/server/containers
https://eff-certbot.readthedocs.io/en/stable/using.html#where-are-my-certificates
to build the custom image and fix the cert permissions.
Dockerfile
FROM quay.io/keycloak/keycloak:latest as builder
ENV KEYCLOAK_ADMIN=root
ENV KEYCLOAK_ADMIN_PASSWORD=change_me
WORKDIR /opt/keycloak
FROM quay.io/keycloak/keycloak:latest
COPY --from=builder /opt/keycloak/ /opt/keycloak/
COPY kc-export.json /opt/keycloak/kc-export.json
RUN /opt/keycloak/bin/kc.sh import --file /opt/keycloak/kc-export.json
VOLUME [ "/opt/keycloak/certs" ]
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
Then start the container
docker run -p 8443:8443 \
-v /etc/letsencrypt:/opt/keycloak/certs:ro \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=change_me \
-e JAVA_OPTS_APPEND="$JAVA_OPTS_APPEND" \
my-keycloak-image:latest \
start-dev \
--https-certificate-file=/opt/keycloak/certs/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/opt/keycloak/certs/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME

docker can't run vscodium

Mine is a bit of a peculiar situation: I created a Dockerfile that "works", if not for some problems.
Here is a "working" version:
ARG IMGVERS=latest
FROM bensuperpc/tinycore:${IMGVERS}
LABEL maintainer "Vinnie Costante <****#gmail.com>"
ARG DOWNDIR=/tmp/download
ARG INSTDIR=/opt/vscodium
ARG REPOAPI="https://api.github.com/repos/VSCodium/vscodium/releases/latest"
ENV LANG=C.UTF-8 LC_ALL=C PATH="${PATH}:${INSTDIR}/bin/"
RUN tce-load -wic Xlibs nss gtk3 libasound libcups python3.9 tk8.6 \
&& rm -rf /tmp/tce/optional/*
RUN sudo ln -s /lib /lib64 \
&& sudo ln -s /usr/local/etc/fonts /etc/fonts \
&& sudo mkdir -p ${DOWNDIR} ${INSTDIR} \
&& sudo chown -R tc:staff ${DOWNDIR} ${INSTDIR}
#COPY VSCodium-linux-x64-1.57.1.tar.gz ${DOWNDIR}/
RUN wget http://192.168.43.6:8000/VSCodium-linux-x64-1.57.1.tar.gz -P ${DOWNDIR}
RUN tar xvf ${DOWNDIR}/VSCodium*.gz -C ${INSTDIR} \
&& rm -rf ${DOWNDIR}
CMD ["codium"]
The issues are these:
Starting the image with the command below, vscodium does not start; but if I enter the shell (adding /bin/ash to the end of the docker run) and then run codium manually, vscodium starts. I tried many ways, even changing the entrypoint; the result is always the same. But if I add any other graphical program (like firefox) and make it the argument of the CMD instruction inside the Dockerfile, everything works as it should.
docker run -it --rm \
--net=host \
--env="DISPLAY=unix${DISPLAY}" \
--workdir /home/tc \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
--name tc \
tinycodium
The last two versions of codium (1.58.0 and 1.58.1) don't work at all in Docker, but they start normally on the same distro when not containerized. I tried installing other dependencies, but nothing worked. Right now I don't know how to figure out what's wrong with these two new versions.
I don't know how to set a volume to save codium's data. I tried something like --volume=/home/vinnie/docker:/home/tc, but there are always problems with user/group permissions. I've also tried running the container as a user added to the docker group, but there's always a mess with permissions. If someone could explain how to proceed, the directories I want to persist are these:
/home/tc/.vscode-oss
/home/tc/.cache/mesa_shader_cache
/home/tc/.config/VSCodium
/home/tc/.config/glib-2.0/settings
/home/tc/.local/share
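For the volume question, one common pattern (a sketch, not tested against this image) is to bind-mount one host directory per path you want to persist and make the ownership match: either run the container as your host uid/gid, if that matches the container user, or pre-chown the host directories to the container user's uid before mounting. The host paths below are placeholders:

```shell
# Sketch: persist two of the listed directories via bind mounts.
mkdir -p "$HOME/docker/codium/.vscode-oss" "$HOME/docker/codium/VSCodium"
docker run -it --rm \
  --net=host \
  --env="DISPLAY=unix${DISPLAY}" \
  --workdir /home/tc \
  --volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
  --volume="$HOME/docker/codium/.vscode-oss:/home/tc/.vscode-oss" \
  --volume="$HOME/docker/codium/VSCodium:/home/tc/.config/VSCodium" \
  --name tc \
  tinycodium
# If files come out owned by the wrong user, chown the host directories to the
# uid/gid of the container's tc user (shown by `docker exec tc id`).
```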
Try running codium --verbose inside the container to see why it fails to start.

docker-compose env vars not available when connecting via ssh

I have a local dev environment which requires several hosts reachable via SSH, so I set up a docker-compose.yml with some services:
services:
  ssh1:
    build:
      context: ./.project/docker/ssh1
      dockerfile: Dockerfile
    environment:
      MYSQL_USER: app1
      MYSQL_PASSWORD: app1
The Dockerfile has the following contents:
FROM ubuntu:20.04
RUN export DEBIAN_FRONTEND=noninteractive \
&& ln -fs /usr/share/zoneinfo/Europe/Berlin /etc/localtime \
&& apt update \
&& apt upgrade -y \
&& apt install -y openssh-server rsync php \
&& mkdir /run/sshd/ \
&& ssh-keygen -A \
&& for key in $(ls /etc/ssh/ssh_host_* | grep -v pub); do echo "HostKey $key" >> /etc/ssh/sshd_config; done \
&& addgroup --gid 1000 app \
&& adduser --gecos "" --disabled-password --shell /bin/bash --uid 1000 --gid 1000 app \
&& mkdir -m 700 /home/app/.ssh/ \
&& chown app:app /home/app/.ssh/ \
&& rm -rf /var/lib/apt/lists/*
COPY --chown=app:app ssh1_rsa.pub /home/app/.ssh/authorized_keys
CMD ["/usr/sbin/sshd", "-D"]
EXPOSE 22
I can verify that the environment variables are set in the container:
$ docker-compose exec ssh1 printenv | grep MYSQL
MYSQL_USER=app1
MYSQL_PASSWORD=app1
docker inspect project_ssh1_1 also shows the ENV variables.
But when I connect from another container to ssh1 via SSH, my environment variables are not set.
Why are my environment variables not set when I SSH into the container?
I would also appreciate any in-depth input on how env vars are set in the container via Docker and how env vars are inherited from processes or the "OS".
Solution edit:
The actual question was answered. However, I did not ask the right question, so here's my actual solution: I am able to set the ENV VARS in the SSH session with a pretty hacky workaround, which should not be used in PROD environments, since it could lead to information disclosure.
Add all ENV VARS as build args.
docker-compose.yml:
ssh1:
  build:
    context: ./.project/docker/ssh1
    dockerfile: Dockerfile
    args:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: app1
      MYSQL_USER: app1
      MYSQL_PASSWORD: app1
      MYSQL_HOST: mysql1
And write them to $HOME/.ssh/environment, as well as enabling PermitUserEnvironment. Don't do this in production!
Dockerfile
FROM ubuntu:20.04
ARG MYSQL_ROOT_PASSWORD
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_HOST
RUN echo "MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD" >> /home/app/.ssh/environment \
&& echo "MYSQL_DATABASE=$MYSQL_DATABASE" >> /home/app/.ssh/environment \
&& echo "MYSQL_USER=$MYSQL_USER" >> /home/app/.ssh/environment \
&& echo "MYSQL_PASSWORD=$MYSQL_PASSWORD" >> /home/app/.ssh/environment \
&& echo "MYSQL_HOST=$MYSQL_HOST" >> /home/app/.ssh/environment \
&& sed -i 's/#PermitUserEnvironment no/PermitUserEnvironment yes/g' /etc/ssh/sshd_config
Now, when you log in, SSH will read the env vars from the user's .ssh/environment file (app in this case) and set them in the user's SSH session.
Environment variables are present in RUN commands and in the shell you get when issuing a docker exec command, but when you ssh into an SSH server running inside a container, you get a brand-new shell which doesn't have those env variables set.
Your issue actually has little to do with Docker; it is due to the way sshd works: for every connection, sshd sets up a new environment, wiping out all variables in its own environment; see the "Login Process" section in man sshd.
(It's easy to see why this makes sense: sshd is started by the root user and may hold sensitive data in its environment variables that should not be leaked to the users. Other variables, e.g. HOME, PATH, SHELL, wouldn't be reasonable to pass to the user session anyway.)
Depending on your use case, there are various ways to pass environment variables to a new SSH session, depending on whether it is an interactive or non-interactive session and whether it runs a (login) shell:
~/.ssh/environment: variables injected by ssh, see PermitUserEnvironment
/etc/environment: used by pam_env on login
/etc/profile, ~/.bashrc and alike: configs used by the (login) shell, see bash for example.
Also depending on your use case you have now various options how to add these files to the container:
if the variables are static: just ADD the respective file to the image or create it in the Dockerfile
if the variables are set on build-time: use ARGs (or ENV) to pass the variables to the build and create the respective file from that in the build (as you did in your solution)
if the variables should be set on container run-time:
use a custom ENTRYPOINT script to generate the respective file on startup from the passed environment variables or command line arguments
volume-mount the respective file into the container (you may also use docker secret for sensitive data here)
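For the run-time case, the custom ENTRYPOINT option can be sketched like this (a minimal sh script; the variable names are the ones from the compose file above, and the script name entrypoint.sh is an assumption):

```shell
#!/bin/sh
# entrypoint.sh (hypothetical name): copy selected environment variables into
# the user's ~/.ssh/environment at container start, then hand off to the CMD.
# Requires PermitUserEnvironment yes in sshd_config, as in the question.

write_env_file() {
    # $1 = destination file, remaining args = names of variables to persist
    dest="$1"; shift
    mkdir -p "$(dirname "$dest")"
    : > "$dest"                       # truncate any stale contents
    for name in "$@"; do
        eval "value=\${$name:-}"      # indirect expansion in plain sh
        if [ -n "$value" ]; then
            printf '%s=%s\n' "$name" "$value" >> "$dest"
        fi
    done
}

write_env_file "$HOME/.ssh/environment" \
    MYSQL_ROOT_PASSWORD MYSQL_DATABASE MYSQL_USER MYSQL_PASSWORD MYSQL_HOST

exec "$@"                             # run the container's CMD (e.g. sshd -D)
```

In the Dockerfile you would then set ENTRYPOINT ["/entrypoint.sh"] and keep CMD ["/usr/sbin/sshd", "-D"], so the file is regenerated from whatever environment docker run passes in.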

tcpdump on docker container gives error "invalid number of output files"

I'm trying to run a tcpdump command in a docker container, but I always get this error: "tcpdump: invalid number of output files ="
I'm running docker in Ubuntu.
This is my container run:
sudo docker run --name pcap_log -it --network=host -e "log_int=any" -e "log_rot=30" -e "log_fil=120" --mount source=/home/docker/container_data/pcap_data,target=/pcap_captures,type=bind pcap_log:12345
This is my image:
FROM debian:10.3
RUN apt-get update \
&& apt-get install -y \
tcpdump
RUN mv /usr/sbin/tcpdump /usr/bin/tcpdump
RUN mkdir /pcap_captures
WORKDIR /pcap_captures
ARG log_int
ARG log_rot
ARG log_fil
ENV log_int = $log_int
ENV log_rot = $log_rot
ENV log_fil = $log_fil
CMD tcpdump -i $log_int -G $log_rot -w '%Y-%m-%d_%H:%M:%S.pcap' -W $log_fil -U
"tcpdump: invalid number of output files ="
As you can see, there is an = sign left over somewhere that is not interpreted correctly.
The problem is in your ENV declarations, which have an extra space before and after the = sign. For example
ENV log_int = $log_int
instructs Docker to set the environment variable log_int to the value = $log_int. Either drop the equal sign completely
ENV log_int $log_int
or remove the spaces around it
ENV log_int=$log_int
Both forms are totally equivalent.
I actually suggest you keep the second form, declare all your vars in a single instruction, and protect the values with quotes:
ENV log_int="$log_int" log_rot="$log_rot" log_fil="$log_fil"
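To see where the stray = in the error message comes from: with ENV log_fil = $log_fil (and the build ARG empty, since it was never passed at build time), the variable's value collapses to just "= ", so the unquoted expansion in the shell-form CMD hands tcpdump the arguments -W =. A quick sketch outside Docker:

```shell
# Simulate what the broken ENV line produces and how the CMD expands it.
log_fil="= "                       # value produced by `ENV log_fil = $log_fil`
set -- -W $log_fil                 # unquoted expansion, as in the shell-form CMD
echo "tcpdump receives: $*"        # prints: tcpdump receives: -W =
```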

Docker doesn't seem to be mapping ports

I'm working with Hugo, trying to run it inside a Docker container to allow people to easily manage content.
My first task is to get Hugo running so people can view the site locally.
Here's my Dockerfile:
FROM alpine:3.3
RUN apk update && apk upgrade && \
apk add --no-cache go bash git openssh && \
mkdir -p /aws && \
apk -Uuv add groff less python py-pip && \
pip install awscli && \
apk --purge -v del py-pip && \
rm /var/cache/apk/* && \
mkdir -p /go/src /go/bin && chmod -R 777 /go
ENV GOPATH /go
ENV PATH /go/bin:$PATH
RUN go get -v github.com/spf13/hugo
RUN git clone http://mygitrepo.com /app
WORKDIR /app
EXPOSE 1313
ENTRYPOINT ["hugo","server"]
I'm checking out the site repo and then running Hugo via hugo server.
I'm then running this container via:
docker run -d -p 1313:1313 --name app app
Everything reports as starting OK; however, when I try to browse locally on localhost:1313 I see nothing.
Any ideas where I'm going wrong?
UPDATE
docker ps gives me:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9e1f12849044 app "hugo server" 16 minutes ago Up 16 minutes 0.0.0.0:1313->1313/tcp app
And docker logs 9e1 gives me:
Started building sites ...
Built site for language en:
0 draft content
0 future content
0 expired content
25 pages created
0 non-page files copied
0 paginator pages created
0 tags created
0 categories created
total in 64 ms
Watching for changes in /ltec/{data,content,layouts,static,themes}
Serving pages from memory
Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
Press Ctrl+C to stop
I had the same problem, but following this tutorial http://ahmedalani.com/post/so-recursive-it-hurts/, I learned to use the --bind parameter of the hugo server command.
Adding that parameter with the address 0.0.0.0 gives us --bind=0.0.0.0.
That works for me. I think this is natural behavior for every container: localhost is scoped to the container itself, but if you bind to 0.0.0.0 the server becomes visible to the main host.
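Concretely, since the Dockerfile uses ENTRYPOINT ["hugo","server"], the extra flag can simply be appended at docker run time (a sketch using the image and port from the question):

```shell
# Append --bind so Hugo listens on all interfaces, not just the container's
# own loopback; the published port 1313:1313 then works from the host.
docker run -d -p 1313:1313 --name app app --bind=0.0.0.0
```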
This is because Docker is actually running in a VM. You need to navigate to the docker-machine ip instead of localhost.
curl $(docker-machine ip):1313
Delete EXPOSE 1313 in your Dockerfile. Dockerfile reference.
