I can run Keycloak with the following command:
./bin/kc.sh start-dev \
--https-certificate-file=/etc/letsencrypt/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/etc/letsencrypt/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
It works as expected.
On the same computer, I try to run it using Docker:
docker run -p 80:8080 -p 443:8443 \
-v /etc/letsencrypt:/etc/letsencrypt:ro \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=change_me \
-e JAVA_OPTS_APPEND="$JAVA_OPTS_APPEND" \
quay.io/keycloak/keycloak:latest \
start-dev \
--https-certificate-file=/ect/letsencrypt/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/ect/letsencrypt/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
It fails with the following errors:
2022-12-23 23:11:59,784 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
2022-12-23 23:11:59,785 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: /ect/letsencrypt/live/keycloak.fhir-poc.hcs.us.com/cert.pem
2022-12-23 23:11:59,787 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) Key material not provided to setup HTTPS. Please configure your keys/certificates.
Any suggestions besides a reverse proxy?
The problem is the symlinked directory structure Let's Encrypt uses on Linux and the permissions needed to access those files.
Let's Encrypt's directory structure works like this:
/etc/letsencrypt/live/<your-domain>/cert.pem -> /etc/letsencrypt/archive/<your-domain>/cert1.pem
The problem is the symlink from the live folder to the archive folder/file.
The permissions on those directories are usually too restrictive.
A quick fix is to create a certificate mirror: copy the related files from /etc/letsencrypt/live/<your-domain>/*.pem
to a new cert folder like /opt/certs
change the permissions on /opt/certs to 777: chmod -R 777 /opt/certs
create a cron.monthly job in /etc/cron.monthly that copies the files to /opt/certs and fixes the permissions every month, so your certificate mirror stays up to date (see the sketch below)
This will make your example work. Keep in mind that permissions like 777 let everyone access these files; use proper permissions in a production environment.
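A minimal sketch of such a monthly job, assuming the mirror lives in /opt/certs and using a placeholder domain (adjust both to your setup):
#!/bin/sh
# /etc/cron.monthly/sync-certs (sketch): mirror the Let's Encrypt certs
# into a directory the container is allowed to read.
DOMAIN=example.com                     # placeholder: your domain
SRC=/etc/letsencrypt/live/$DOMAIN
DST=/opt/certs
mkdir -p "$DST"
cp -L "$SRC"/*.pem "$DST"/             # -L dereferences the live/ symlinks
chmod -R 777 "$DST"                    # matches the quick fix above; tighten for production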
I discovered the answer.
Let's Encrypt certificates in the "live" folder are symlinks to the "archive" folder, and I needed a custom Docker image for Keycloak to mount my certificates. So I followed the Keycloak docs for creating a custom Docker image and started a container with that image.
Following
https://www.keycloak.org/server/containers
https://eff-certbot.readthedocs.io/en/stable/using.html#where-are-my-certificates
to build a custom image and change the cert permissions
Dockerfile
# Builder stage, following the Keycloak container guide
FROM quay.io/keycloak/keycloak:latest as builder
ENV KEYCLOAK_ADMIN=root
ENV KEYCLOAK_ADMIN_PASSWORD=change_me
WORKDIR /opt/keycloak

# Final image
FROM quay.io/keycloak/keycloak:latest
COPY --from=builder /opt/keycloak/ /opt/keycloak/
# Import a previously exported realm at build time
COPY kc-export.json /opt/keycloak/kc-export.json
RUN /opt/keycloak/bin/kc.sh import --file /opt/keycloak/kc-export.json
# Mount point for the certificates
VOLUME [ "/opt/keycloak/certs" ]
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
Then start the container, mounting /etc/letsencrypt onto the image's /opt/keycloak/certs volume so that the certificate paths below resolve inside the container:
docker run -p 8443:8443 \
-v /etc/letsencrypt:/opt/keycloak/certs:ro \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=change_me \
-e JAVA_OPTS_APPEND="$JAVA_OPTS_APPEND" \
my-keycloak-image:latest \
start-dev \
--https-certificate-file=/opt/keycloak/certs/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/opt/keycloak/certs/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
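If the container user still cannot read the key material through the live/ symlinks, one option (a sketch, not part of the original answer; it assumes the official image runs as UID 1000 and uses placeholder paths) is to flatten the symlinks into a plain directory on the host and mount that instead:
# Hypothetical helper run on the host before starting the container
sudo mkdir -p /opt/keycloak-certs/live/$HOSTNAME
sudo cp -L /etc/letsencrypt/live/$HOSTNAME/*.pem /opt/keycloak-certs/live/$HOSTNAME/   # dereference the symlinks
sudo chown -R 1000:1000 /opt/keycloak-certs                                            # assumed container UID
# then mount it with: -v /opt/keycloak-certs:/opt/keycloak/certs:ro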
Related
I need to import a client .pfx certificate inside a selenoid container. For Chrome I use the following Dockerfile:
ARG IMAGE_VERSION
FROM selenoid/chrome:${IMAGE_VERSION}
ARG USER_ID
ARG USER_PASSWORD
COPY policy.json /etc/opt/chrome/policies/managed/policy.json
COPY ${USER_ID}.pfx .
RUN mkdir -p $HOME/.pki/nssdb \
&& certutil -N -d $HOME/.pki/nssdb --empty-password \
&& pk12util -d sql:$HOME/.pki/nssdb -i ${USER_ID}.pfx -W ${USER_PASSWORD}
However, the same approach doesn't work for Microsoft Edge and Firefox. Any ideas what I am doing wrong?
I have the following Dockerfile:
FROM php:7.4 as base
RUN apt-get update && apt-get install -y libzip-dev && \
docker-php-ext-install zip
# install some other things...
FROM base as intermediate
COPY upgrade.sh /usr/local/bin
FROM base as final
COPY start-app.sh /usr/local/bin
As you can see, I have 3 stages:
base
intermediate
final
First, I build the base image and then both "derived" images. My problem is that I need to push the intermediate and the final image to my (GitLab) registry. The images are built using the following gitlab-ci.yml:
.build_container: &build_container
stage: build
image:
name: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/moby/buildkit:rootless
entrypoint: ["sh", "-c"]
variables:
BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
script:
- mkdir ~/.docker
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > ~/.docker/config.json
- |
buildctl-daemonless.sh build \
--frontend dockerfile.v0 \
--local context=${CI_PROJECT_DIR} \
--local dockerfile=${CI_PROJECT_DIR} \
--opt filename=./${DOCKERFILE} \
--import-cache type=registry,ref=${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG} \
--export-cache type=inline \
--output type=image,name=${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG},push=true
Build:Container:
<<: *build_container
variables:
DOCKERFILE: "Dockerfile"
OK, I could simply add another buildctl-daemonless.sh command and use a separate file, but I want to make sure that both images (intermediate and final) are built successfully before pushing them. So I'm looking for a solution that first builds the intermediate and the final images and then pushes both to the registry, e.g. something like this:
buildctl-daemonless.sh build \
--frontend dockerfile.v0 \
--local context=${CI_PROJECT_DIR} \
--local dockerfile=${CI_PROJECT_DIR} \
--opt filename=./${DOCKERFILE} \
--import-cache type=registry,ref=${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG} \
--export-cache type=inline
buildctl-daemonless.sh push intermediate final
Unfortunately there's no "push" command in buildctl-daemonless.sh as far as I know. So does anyone have an idea how I can do this?
Directly from BuildKit, I don't think there's a separate push command. The output of a multi-platform image usually goes directly to a registry, but it can also be an OCI Layout tar file. To write that tar file, you would add the following option:
--output type=oci,dest=path/to/output.tar
With that tar file, there are various standalone tools that can help. I'm pretty sure Red Hat's skopeo and Google's go-containerregistry/crane both have options. My own tool is regclient, which includes the following regctl command:
regctl image import ${image_name_tag} path/to/output.tar
There's a regclient/regctl:alpine image that's made to be embedded in CI pipelines like GitLab, and it will read registry auths from the same ~/.docker/config.json.
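Putting that together for this pipeline, a rough sketch might look like the following (the target stage names come from the Dockerfile above, but the output file names and tags are only examples):
# Build both stages to local OCI tar files first; nothing is pushed yet
buildctl-daemonless.sh build \
  --frontend dockerfile.v0 \
  --local context=${CI_PROJECT_DIR} \
  --local dockerfile=${CI_PROJECT_DIR} \
  --opt filename=./${DOCKERFILE} \
  --opt target=intermediate \
  --output type=oci,dest=intermediate.tar
buildctl-daemonless.sh build \
  --frontend dockerfile.v0 \
  --local context=${CI_PROJECT_DIR} \
  --local dockerfile=${CI_PROJECT_DIR} \
  --opt filename=./${DOCKERFILE} \
  --opt target=final \
  --output type=oci,dest=final.tar
# Only if both builds succeeded, import/push the tars
regctl image import ${CI_REGISTRY_IMAGE}:intermediate-${CI_COMMIT_REF_SLUG} intermediate.tar
regctl image import ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG} final.tar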
Mine is a bit of a peculiar situation: I created a Dockerfile that "works", if not for some problems.
Here is a "working" version:
ARG IMGVERS=latest
FROM bensuperpc/tinycore:${IMGVERS}
LABEL maintainer "Vinnie Costante <****#gmail.com>"
ARG DOWNDIR=/tmp/download
ARG INSTDIR=/opt/vscodium
ARG REPOAPI="https://api.github.com/repos/VSCodium/vscodium/releases/latest"
ENV LANG=C.UTF-8 LC_ALL=C PATH="${PATH}:${INSTDIR}/bin/"
RUN tce-load -wic Xlibs nss gtk3 libasound libcups python3.9 tk8.6 \
&& rm -rf /tmp/tce/optional/*
RUN sudo ln -s /lib /lib64 \
&& sudo ln -s /usr/local/etc/fonts /etc/fonts \
&& sudo mkdir -p ${DOWNDIR} ${INSTDIR} \
&& sudo chown -R tc:staff ${DOWNDIR} ${INSTDIR}
#COPY VSCodium-linux-x64-1.57.1.tar.gz ${DOWNDIR}/
RUN wget http://192.168.43.6:8000/VSCodium-linux-x64-1.57.1.tar.gz -P ${DOWNDIR}
RUN tar xvf ${DOWNDIR}/VSCodium*.gz -C ${INSTDIR} \
&& rm -rf ${DOWNDIR}
CMD ["codium"]
The issues are these:
When I start the image with the command below, VSCodium does not start; but if I enter the shell (by adding /bin/ash to the end of the docker run) and then run codium by hand, it starts. I tried many things, even changing the entrypoint, and the result is always the same. But if I add any other graphical program (like Firefox) and use it as the argument of the CMD instruction inside the Dockerfile, everything works as it should.
docker run -it --rm \
--net=host \
--env="DISPLAY=unix${DISPLAY}" \
--workdir /home/tc \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
--name tc \
tinycodium
The last two versions of Codium (1.58.0 and 1.58.1) don't work at all in Docker, but they start normally on the same distro when not containerized. I tried installing other dependencies, but nothing worked. Right now I don't know how to figure out what's wrong with these two new versions.
I don't know how to set a volume to save Codium's data. I tried something like --volume=/home/vinnie/docker:/home/tc, but there are always problems with user/group permissions. I've also tried starting the container as my user by adding it to the docker group, but there is always a mess with permissions. If someone could explain how to proceed, the directories I want to save are these:
/home/tc/.vscode-oss
/home/tc/.cache/mesa_shader_cache
/home/tc/.config/VSCodium
/home/tc/.config/glib-2.0/settings
/home/tc/.local/share
Try running codium --verbose and see if the container starts
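For example (a sketch reusing the run command from the question), the CMD can be overridden on the command line to get the verbose output without editing the Dockerfile:
docker run -it --rm \
--net=host \
--env="DISPLAY=unix${DISPLAY}" \
--workdir /home/tc \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
--name tc \
tinycodium codium --verbose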
I am running JMeter in noVNC. I am able to run JMeter in noVNC, but of course in the default small window.
But when I create an HTTP(S) Test Script Recorder and click on the Start button, I get this error:
The error is: "Could not create script recorder - see log for details: >> keytool error: java.security.ProviderException: Could not initialize NSS << command failed code:1
'keytool -genkeypair -alias:root_ca: -dname"CN=_Jmeter Root CA for recording(INSTALL ONLY IF IT IS YOURS).......FULL ERROR in SCREENSHOT"'"
I tried creating the HTTP(S) Test Script Recorder with and without a proxy set up in my Chrome browser, and I get the same error (shown on the right-hand side of the screenshot).
Below is my Dockerfile:
FROM uphy/novnc-alpine
RUN \
apk add --no-cache curl openjdk8-jre bash \
&& apk add --no-cache nss \
&& curl -L https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.4.1.tgz > /tmp/jmeter.tgz \
&& mkdir -p /opt \
&& tar -xvf /tmp/jmeter.tgz -C /opt \
&& rm /tmp/jmeter.tgz \
&& cd /etc/supervisor/conf.d \
&& echo '[program:jmeter]' >> supervisord.conf \
&& echo 'command=/opt/apache-jmeter-5.4.1/bin/./jmeter' >> supervisord.conf \
&& echo 'autorestart=true' >> supervisord.conf
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk/
RUN export JAVA_HOME
This is how I am running it (related to Use Jmeter desktop application as web app):
creating a Docker image with noVNC and running JMeter inside noVNC (Dockerfile provided above)
exposing it on a port and accessing it in the browser
docker build -t jmeter .
docker run -it --rm -p 8080:8080 jmeter
I also checked my Docker container: the JDK is present at /usr/lib/jvm/java-1.8-openjdk/ and JMeter is present at /opt/apache-jmeter-5.4.1.
I am not sure whether I should pass more options or arguments to the docker run command.
I am wondering how JMeter will create the certificate inside my bin directory when I click the Start button, since JMeter is running inside the noVNC Docker container.
Is there any other way to automatically create/integrate this certificate without importing it or clicking the Start button?
How can the proxy setting be done if JMeter is running inside the noVNC container?
I think you need to install the nss package.
change this line:
apk add --no-cache curl openjdk8-jre bash \
to this one:
apk add --no-cache curl openjdk8-jre bash nss \
Once you rebuild the image, the HTTP(S) Test Script Recorder should launch normally.
With regards to the certificate, it will be stored in JMeter's "bin" folder in the container, so if you want to use it in a browser inside the container, you will have to install the browser there as well.
If you want to use the browser on your local machine, you will need to copy the certificate out of the container and expose another port for JMeter's HTTP(S) Test Script Recorder.
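For example (a sketch, assuming the recorder stays on its default port 8888 and the certificate keeps JMeter's usual file name):
# Expose the recorder's proxy port alongside noVNC
docker run -it --rm -p 8080:8080 -p 8888:8888 jmeter
# Copy the generated root CA certificate out of the running container
# (adjust the container ID/name and the file name if yours differs)
docker cp <container-id>:/opt/apache-jmeter-5.4.1/bin/ApacheJMeterTemporaryRootCA.crt .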
Just in case, be aware that you can also record JMeter test scripts using the JMeter Chrome Extension; in that case you won't have to worry about proxies, certificates, and ports.
I am new to Docker, and am attempting to build an image that involves performing an npm install. Some of our dependencies come from private repos we have, and I am hitting an SSH-related issue:
I realised I was not supplying any SSH details to my Dockerfile, and came across various posts online about how to do this by passing args to the docker build command.
So, taken from here, I have added the following to my Dockerfile before the npm install command gets run:
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && \
apt-get install -y \
git \
openssh-server \
libmysqlclient-dev
# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
chmod 0700 /root/.ssh && \
ssh-keyscan github.com > /root/.ssh/known_hosts
# Add the keys and set permissions
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
chmod 600 /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa.pub
So running the docker build command again with the correct args supplied, I do see further activity in the console that suggests my SSH key is being utilised:
But as you can see, I am getting "no hostkey alg" messages, and I am still getting the same 'Host key verification failed' error. I was wondering if I could view the log file it references in the error:
Do I need to get the image running in order to be able to connect to it and browse the 'root' folder?
I hope I have made sense; please be gentle, I am a Docker noob!
Thanks
The lines that start with ---> in the docker build output are valid Docker image IDs. You can pick any of these and docker run them:
docker run --rm -it 59c45dac474a sh
If a step is actually failing, one useful debugging trick is to launch the image built in the step before it and run the command by hand.
Remember that anyone who has your image can do this; the way you’ve built it, if you ever push your image to any repository, your ssh private key is there for the taking, and you should probably consider it compromised. That’s doubly true since it will also be there in plain text in docker history output.
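For example (a sketch; the image name is a placeholder, and the key path is the one used in the Dockerfile above):
# Anyone with the image can read the key straight out of the filesystem...
docker run --rm your-image:latest cat /root/.ssh/id_rsa
# ...and the RUN echo "$ssh_prv_key" layer is also visible here:
docker history --no-trunc your-image:latest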