In a container built with Quarkus, trying to optionally enable OIDC integration with Keycloak on Docker container start - docker

I would like to provide our container with an optional OIDC/Keycloak integration, disabled by default but possible to enable when starting a container via environment variables.
This is how the configuration looks in application.properties at build time:
quarkus.oidc.enabled=false
# quarkus.oidc.auth-server-url=<auth-server-url>
# quarkus.oidc.client-id=<client-id>
# quarkus.oidc.credentials.secret=<secret>
Ideally, on container start, quarkus.oidc.enabled=true could be set alongside the other three properties via container environment variables.
However, Quarkus won't allow this, as quarkus.oidc.enabled can apparently only be set at build time and cannot be overridden at runtime (https://quarkus.io/guides/security-openid-connect#configuring-the-application).
I have found a Google Groups thread that picks up on this topic (https://groups.google.com/g/quarkus-dev/c/isGqZvY829g/m/BNerQvSRAQAJ), mentioning the use of quarkus.oidc.tenant-enabled=false instead, but I am not sure how to apply this strategy in my use case.
Can anyone help me out here on how to make this work without having to build two images (one with OIDC enabled, and one without)?
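For reference, this is roughly how I understand the tenant-enabled strategy would be applied, though I have not verified it against my setup (the Keycloak values below are placeholders):
# application.properties at build time: the OIDC extension is built in,
# but the default tenant is disabled until explicitly enabled at runtime.
quarkus.oidc.enabled=true
quarkus.oidc.tenant-enabled=false
# On container start, enable and configure the tenant via environment variables
# (using the standard Quarkus property-to-environment-variable mapping):
docker run \
  -e QUARKUS_OIDC_TENANT_ENABLED=true \
  -e QUARKUS_OIDC_AUTH_SERVER_URL=https://keycloak.example.com/realms/my-realm \
  -e QUARKUS_OIDC_CLIENT_ID=my-client \
  -e QUARKUS_OIDC_CREDENTIALS_SECRET=my-secret \
  my-image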

Related

How do you ensure a Kubernetes Deployment file does not override secure settings in the Dockerfile?

Let's assume you want to run a container under a rootless user context using Kubernetes and the Docker runtime. Hence, you specify the USER directive in the Dockerfile as a non-root user (e.g. UID 1000). However, this setting can be overridden in the Deployment file via the runAsUser field.
If the above scenario is possible (correct me if I am wrong), the security team would potentially scan the Dockerfile and container image for vulnerabilities and find them to be safe, only to be exposed to risk at deployment time when a Kubernetes Deployment file specifies runAsUser: 0, which they are not aware of.
What do you think is the best way to mitigate this risk? Obviously, we can place a gate for scanning Deployment files as a final check, check both artefacts, or deploy a PodSecurityPolicy enforcing this (roughly as sketched below), but I was keen to hear whether there are more efficient ways, especially in an Agile development space.
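For reference, the kind of PodSecurityPolicy I have in mind would look roughly like this; the policy name is a placeholder, and note that PodSecurityPolicy is deprecated in newer Kubernetes versions:
# Rejects any pod that tries to run as root (e.g. runAsUser: 0).
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: require-non-root
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'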

How to configure Graylog Plugin on bootstrap (non interactive)?

I set up a Graylog server based on the official Graylog 3 Docker image and added the SSO plugin. In principle it works, but I have to configure the SSO headers using the UI after each container start.
I see the options to configure Graylog itself using either a server.conf file or environment variables. But I cannot find any way to configure the plugin upfront to get a final image for automatic deployment.
Is there any way to configure Graylog plugins using special config file entries, prefixed environment variables or separate config files?
If you create your own shell script to update files/settings, you can create a new image based on the original (with a new Dockerfile) which, when started, runs the script, modifies any relevant settings and then starts the application server. Even better if you can have the script take inputs which you supply as environment variables to the Docker container, as sketched below.
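A minimal sketch of that idea; the config file path, the GRAYLOG_SSO_HEADER variable, and the original entrypoint command are assumptions to adapt to the actual plugin:
# Dockerfile: extend the official image with a wrapper entrypoint.
FROM graylog/graylog:3.0
COPY configure-and-start.sh /configure-and-start.sh
ENTRYPOINT ["/configure-and-start.sh"]

#!/bin/sh
# configure-and-start.sh: write plugin settings from environment variables, then
# hand off to the image's original entrypoint (verify the real path with docker inspect).
echo "sso_header=${GRAYLOG_SSO_HEADER}" > /usr/share/graylog/data/config/sso.conf
exec /docker-entrypoint.sh graylog
The header value can then be supplied at container start, e.g. with -e GRAYLOG_SSO_HEADER=X-Forwarded-User.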

Docker: Building within corporate firewall, deploying outside?

I have a software suite (Node web server, database, other tools) that I'm developing inside a corporate firewall, building into Docker images, and deploying with docker-compose. In order to actually install all the software into the images, I need to set up the environment to use a network proxy and to disable strict SSL checking (because the firewall includes SSL inspection), not only in terms of environment variables but also for npm, apt and so on.
I've got all this working so that I can both build and deploy within the firewall. My Dockerfiles and build scripts enable all the proxy/SSL configuration only when a docker --build-arg is supplied, which sets an environment variable via ENV enable_proxies=$my_build_arg, so I can just as easily skip all that configuration when building and deploying outside the firewall.
However, I need to be able to build everything inside the firewall, and deploy outside it. Which means that all the proxy stuff has to be enabled at build time (so the software packages can all be installed) if the relevant --build-arg is specified, and then also separately either enabled or disabled at runtime using --env enable_proxies=true or something similar.
I'm still relatively new to some aspects of Docker, but my understanding is that the only thing executed when the image is run is the contents of the CMD entry in the Dockerfile, and that CMD can only execute a single command.
Does anyone have any idea how I can/should go about separating the proxy/ssl settings during build and runtime like this?
You should be able to build and ship a single image; “build inside the firewall, deploy outside” is pretty normal.
One approach that can work for this is to use Docker’s multi-stage build functionality to have two stages. The first maybe has special proxy settings and gets the dependencies; the second is the actual runtime image.
# Build stage: the proxy settings are available here via the build arg.
FROM ... AS build
ARG my_build_arg
ENV enable_proxies=$my_build_arg
WORKDIR /artifacts
RUN curl http://internal.source.example.com/...

# Runtime stage: no ENV enable_proxies, so nothing proxy-related is shipped.
FROM ...
COPY --from=build /artifacts/ /artifacts/
...
CMD ["the_app"]
Since the second stage doesn't have the ENV directive, it will never have $enable_proxies set, which is what you want for the actual runtime image.
Another similar approach is to write a script that runs on the host that downloads dependencies into a local build tree and then runs docker build. (This might be required if you need to support particularly old Dockers.) Then you could use whatever the host has set for $http_proxy and not worry about handling the proxy vs. non-proxy case specially.
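A rough sketch of that host-side variant, reusing the internal URL from the example above (paths and the image name are illustrative):
# fetch-and-build.sh, run on a host inside the firewall; the host's own
# $http_proxy settings apply to the download, and nothing proxy-related enters the image.
mkdir -p build
curl -o build/artifact.tar.gz http://internal.source.example.com/...
docker build -t my-app:latest .
# The Dockerfile then simply copies the pre-fetched files in:
# COPY build/ /artifacts/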

Is it possible to specify a Docker image build argument at pod creation time in Kubernetes?

I have a Node.JS based application consisting of three services. One is a web application, and two are internal APIs. The web application needs to talk to the APIs to do its work, but I do not want to hard-code the IP address and ports of the other services into the codebase.
In my local environment I am using the nifty envify Node.JS module to fix this. Basically, I can pretend that I have access to environment variables while I'm writing the code, and then use the envify CLI tool to convert those variables to hard-coded strings in the final browserified file.
I would like to containerize this solution and deploy it to Kubernetes. This is where I run into issues...
I've defined a couple of ARG variables in my Docker image template. These get turned into environment variables via RUN export FOO=${FOO}, and after running npm run-script build I have the container I need. OK, so I can run:
docker build . -t residentmario/my_foo_app:latest --build-arg FOO=localhost:9000 --build-arg BAR=localhost:3000
And then push that up to the registry with docker push.
My qualm with this approach is that I've only succeeded in punting having hard-coded variables to the container image. What I really want is to define the paths at pod initialization time. Is this possible?
Edit: Here are two solutions.
PostStart
Kubernetes comes with a lifecycle hook called PostStart. This is described briefly in "Container Lifecycle Hooks".
This hook fires as soon as the container reaches the ContainerCreated status, i.e. the image has been pulled and the container is fully initialized. You can then use the hook to jump into the container and run arbitrary commands.
In our case, I can create a PostStart event that, when triggered, rebuilds the application with the correct paths.
Unless you created a Docker image that doesn't actually run anything (which seems wrong to me, but let me know if this is considered an OK practice), this does require some duplicate work: stopping the application, rerunning the build process, and starting the application up again.
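For illustration, such a hook hangs off the container spec roughly like this (the script path is illustrative):
# Fragment of a container spec.
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "./scripts/build.sh"]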
Command
As pointed out in a comment, this event doesn't necessarily fire at the right time. Here's another way to do it that's guaranteed to work (and is therefore superior).
A useful Docker container ends with some variant on a CMD serving the application. You can overwrite this run command in Kubernetes, as explained in the "Define a Command and Arguments for a Container" section of the documentation.
So I added a command to the pod definition that ran a shell script which (1) rebuilt the application using the correct paths, provided as environment variables to the pod, and (2) started serving the application:
command: ["/bin/sh"]
args: ["./scripts/build.sh"]
Worked like a charm.
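For completeness, the relevant fragment of the container spec looked roughly like this (the service hostnames and ports are illustrative):
# Pod/Deployment container spec fragment.
containers:
  - name: my-foo-app
    image: residentmario/my_foo_app:latest
    env:
      - name: FOO
        value: "api-foo:9000"
      - name: BAR
        value: "api-bar:3000"
    command: ["/bin/sh"]
    args: ["./scripts/build.sh"]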

Is it a normal practice to include more than one Dockerfile in a project?

If I have a web project in source control and I want to include all the necessary configuration to run it, should I be using more than one Dockerfile to define different profiles for my database, web and data containers?
Are there any examples of this practice?
No. You should either allow the profiles to be mounted in using volumes, so that you can use the same image with different configurations, or set up all the configurations in one image and then allow the profile to be selected via an environment variable or arguments passed to docker run.
The first approach is preferable, as no information about your configuration bleeds into the container definition. You may even make your image public, as it may be of use to other people.
If you are only changing configuration parameters, I would recommend reusing the Dockerfile and passing a configuration file as a parameter, for example as sketched below.
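A minimal example of the volume-mount approach; the image name and paths are illustrative:
# Same image, different configuration per environment.
docker run -v $(pwd)/config/dev.yml:/app/config.yml my-web-app
docker run -v $(pwd)/config/prod.yml:/app/config.yml my-web-app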
