Keycloak SPI Providers and layers not loading when using Docker

I'm trying to set up a Docker image with some custom things, such as a Logback extension, so I have some CLI scripts like this one:
/subsystem=logging: remove()
/extension=org.jboss.as.logging: remove()
/extension=com.custom.logback: add()
/subsystem=com.custom.logback: add()
I also have CLI scripts to configure the datasource pool and themes, add some SPIs to the keycloak-server subsystem, etc. I put these scripts in the /opt/jboss/startup-scripts directory. However, when I create the container, things do not work well: the scripts are not loaded as expected and Keycloak starts with an error, not loading providers such as the password policies used by the realms.
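For context, the scripts are baked into the image roughly like this (a minimal sketch; the file names are placeholders):
FROM jboss/keycloak:6.0.1
# the image is expected to pick up *.cli files from /opt/jboss/startup-scripts at boot
COPY custom-logback.cli /opt/jboss/startup-scripts/
COPY keycloak-spi.cli /opt/jboss/startup-scripts/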
When I'm using standalone Keycloak, all SPI providers load fine, as the log below shows:
2019-07-25 18:27:07.906 WARN [org.keycloak.services] (ServerService Thread Pool -- 65) KC-SERVICES0047: custom-password-policy (com.custom.login.password.PasswordSecurityPolicyFactory) is implementing the internal SPI password-policy. This SPI is internal and may change without notice
2019-07-25 18:27:07.909 WARN [org.keycloak.services] (ServerService Thread Pool -- 65) KC-SERVICES0047: custom-event (com.custom.event.KeycloakServerEventListenerProviderFactory) is implementing the internal SPI eventsListener. This SPI is internal and may change without notice
2019-07-25 18:27:08.026 WARN [org.keycloak.services] (ServerService Thread Pool -- 65) KC-SERVICES0047: custom-mailer (com.custom.mail.MessageSenderProviderFactory) is implementing the internal SPI emailSender. This SPI is internal and may change without notice
2019-07-25 18:27:08.123 WARN [org.keycloak.services] (ServerService Thread Pool -- 65) KC-SERVICES0047: custom-user-domain-verification (com.custom.login.domain.UserDomainVerificationFactory) is implementing the internal SPI authenticator. This SPI is internal and may change without notice
2019-07-25 18:27:08.123 WARN [org.keycloak.services] (ServerService Thread Pool -- 65) KC-SERVICES0047: custom-recaptcha-username-password (com.custom.login.domain.RecaptchaAuthenticatorFactory) is implementing the internal SPI authenticator. This SPI is internal and may change without notice
If I use the same package with Docker, using jboss/keycloak:6.0.1 as the base image, the providers do not load. I'm packaging them as modules, adding them to the $JBOSS_HOME/modules folder and configuring them with the script below:
/subsystem=keycloak-server/: write-attribute(name=providers,value=[classpath:${jboss.home.dir}/providers/*,module:com.custom.custom-keycloak-server])
/subsystem=keycloak-server/theme=defaults/: write-attribute(name=welcomeTheme,value=custom)
/subsystem=keycloak-server/theme=defaults/: write-attribute(name=modules,value=[com.custom.custom-keycloak-server])
/subsystem=keycloak-server/spi=emailSender/: add(default-provider=custom-mailer)
When I execute the script manually inside the container, everything works fine.
I tried both mounting the jar with the providers as a volume and copying the jar while building a custom image, but neither approach works.
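In the image build, the module ends up being copied in roughly like this (same Dockerfile as above; the module path and jar name are illustrative, and I'm assuming $JBOSS_HOME is /opt/jboss/keycloak inside the image):
# module.xml and the provider jar under an (illustrative) JBoss Modules path
COPY modules/com/custom/custom-keycloak-server/main/ /opt/jboss/keycloak/modules/com/custom/custom-keycloak-server/main/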
I'm using the jboss/keycloak:6.0.1 Docker image and Keycloak 6.0.1 standalone, with layers and modules placed in the same directories.
What am I doing wrong? What is the trick to using SPI providers with Docker, or is the image not intended for production or this kind of need?

OK, I've found why this happens.
It comes from /opt/jboss/tools/docker-entrypoint.sh:
#################
# Configuration #
#################
# If the server configuration parameter is not present, append the HA profile.
if echo "$@" | egrep -v -- '-c |-c=|--server-config |--server-config='; then
    SYS_PROPS+=" -c=standalone-ha.xml"
fi
This launches Keycloak in clustered mode; I think they considered standalone mode not safe for production. Quoting the Keycloak documentation:
Standalone operating mode is only useful when you want to run one, and
only one Keycloak server instance. It is not usable for clustered
deployments and all caches are non-distributed and local-only. It is
not recommended that you use standalone mode in production as you will
have a single point of failure. If your standalone mode server goes
down, users will not be able to log in. This mode is really only
useful to test drive and play with the features of Keycloak
To keep standalone mode, extend the image and override the command so that -c standalone.xml is passed as a parameter:
CMD ["-b", "0.0.0.0", "-c", "standalone.xml"]
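Equivalently, without rebuilding the image, the arguments can be passed at run time; they replace the default CMD and are handed to the entrypoint (a sketch, assuming the default entrypoint is kept):
docker run -p 8080:8080 jboss/keycloak:6.0.1 -b 0.0.0.0 -c standalone.xml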

From https://hub.docker.com/r/jboss/keycloak/:
To add a custom provider extend the Keycloak image and add the provider to the /opt/jboss/keycloak/standalone/deployments/ directory.
Did you use a volume at /opt/jboss/keycloak/standalone/deployments/ for your custom providers?
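If not, a minimal Dockerfile sketch for that approach would be (the jar name is just a placeholder):
FROM jboss/keycloak:6.0.1
# the provider jar is picked up from the deployments directory
COPY custom-keycloak-providers.jar /opt/jboss/keycloak/standalone/deployments/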

Related

Cannot enable basic auth on Windows-Exporter to secure node between Windows and Prometheus

As a test environment to monitor the status of Windows servers (CPU, disk usage, memory, network, etc.), I have set up two test nodes with windows_exporter configured on custom port 15000.
Next, I created a separate job for each Windows instance and built a dashboard in Grafana.
The problem is that I want to secure the nodes so that only the Prometheus server can access the exporter output, while all other computers on the same network are denied access.
I have tried installing windows_exporter with:
msiexec /i windows_exporter-0.19.0-amd64.msi LISTEN_PORT="15000" EXTRA_FLAGS="--web.config.file=C:\Configuration\web.yml"
I also tried different combinations of " and ' quoting on the command line for the EXTRA_FLAGS parameter, yet they seem to be ignored. The only parameter that works is the listen port change.
I have followed instructions provided at https://prometheus.io/docs/guides/basic-auth/ to set up basic auth.
web.yml looks like this:
basic_auth:
  username: 'scrapper'
  password: '$2a$14$AWpxyT1KcRPSE07IfmqTqOZznpMfGwxHP8uPVQV8G0qdjggND3hgC'
However, after installation with msiexec, the Windows service entry for windows_exporter has no web.config.file flag:
"C:\Program Files\windows_exporter\windows_exporter.exe" --log.format logger:eventlog?name=windows_exporter --telemetry.addr 0.0.0.0:15000
I tried to edit the service entry with the sc command, but it broke the node completely, forcing me to roll back to unprotected access.
Does basic auth work on windows_exporter the same way as on node_exporter for Linux?
Or is there another way to secure access to the node's exposed data without having to install IIS?
I have never worked with the node exporter on Windows, but on Linux your web.yml configuration file should look as follows:
basic_auth_users:
  <string>: <secret>
like this:
basic_auth_users:
  scrapper: $2a$14$AWpxyT1KcRPSE07IfmqTqOZznpMfGwxHP8uPVQV8G0qdjggND3hgC
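Once the exporter actually picks up the web config file, you can verify basic auth from the Prometheus host with curl, using the plain-text password that was bcrypt-hashed above (host and password here are placeholders):
curl -u scrapper:<plain-text-password> http://<windows-node>:15000/metrics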

Azure Cloud Service microservice to K8 Migration

I am in the process of evaluating moving a very large Azure Cloud Service (Web Role) microservice architecture to AKS and have been working through the necessary code and build changes to support it.
In order to replicate the production environment locally for the developers, we run nginx on the host with SSL offloading, and DNS A records (hosted in Azure) pointing to 127.0.0.1. When running in the Azure Emulator, the net effect is that the developer can both visit the various web front ends in their browser (i.e. https://myapp.mydomain.dev) and hit the various APIs in the solution (Web API 2) from Postman/cURL, etc.
Additionally, due to how the networking of the Azure Emulator works, the apps themselves can resolve each other through nginx on the host (i.e. the MVC app at https://myapp.mydomain.dev can obtain a token from the IdP web API at https://identity.mydomain.dev and then use that token against the API at https://api.mydomain.dev). This is the critical piece and the source of my question.
All attempts at getting the containers themselves to resolve each other the same way the host OS can (browser/Postman, SSL offloading via nginx) have failed. Many of the instructions out there are, understandably, for Linux containers, and adapting the various docker-compose networking settings to their Windows-container equivalents has not yet yielded any success. To keep the development environments aligned with the real systems, which are tenantized and make use of the default mapping in nginx to catch all incoming traffic and route it to a specific user-facing app/container, it is not as simple as determining a "static" method of addressing these on startup, which is why the effort was put into producing the development environments we have today.
Right now, when one service (container) attempts to communicate with another, it ultimately results in a resolution error, as all requests resolve to https://127.0.0.1 due to the DNS A records hosted in Azure for the domain. Since this migration will be a longer-term project, the environments need to co-exist, so changing the way DNS is resolved (real DNS A records pointing to 127.0.0.1, with the host running nginx and handling SSL offloading to the various web roles normally running in the Azure Emulator) is not an option.
Is there a way (with Windows containers) to either:
Allow the container to utilize nginx on the host OS transparently (app must still call the API at https://api.mydomain.dev), which will cause the traffic to be routed properly to the correct container/port defined in the docker-compose file?
OR
Run nginx on each container, allowing each container to then resolve and route appropriately without knowing the IP of the other container, possibly through an alias which could be added to the container's nginx.conf before the service starts?
The platform utilizes OAuth2/OIDC, and it is critical to maintain the full URL to the other services from the application's perspective. Beyond mirroring the production and sandbox environments, these URLs are used for redirect URL and post-logout redirect URL validation, among other things, so using "https://myContainerNameForOtherContainerAlias" is not a workable solution.
Will I have the same problem when setting up the AKS environment as well?

Duplicated port of child tasks in Spring Cloud Data Flow

When I launch a new task (Spring Batch job) using Spring Cloud Data Flow, I see that SCDF automatically initializes Tomcat on some "random" port, but I do not know whether these ports are chosen randomly or follow some rule of the framework.
Therefore, I sometimes run into the error "Web server failed to start. Port 123456 was already in use".
In conclusion, my questions are:
1) How does the framework choose the port (randomly or by some rule)?
2) Is there any way to launch tasks without duplicated ports (a fixed configuration, or a method for choosing an unused port at launch time)?
I don't think SCDF has anything to do with the port assignment.
It is your task application that gets launched. You need to decide whether you really need the web dependency that brings Tomcat into your application.
Assuming you use Spring Boot, you can either exclude the web starter from your dependencies or pass the command-line argument server.port=<?> to set a specific port when launching the task (if you really do need this task app to be a web app).
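For example, assuming the SCDF shell and a task definition named my-batch-task (the name is a placeholder), the port can be passed per launch; setting it to 0 lets Spring Boot pick a free port:
dataflow:> task launch my-batch-task --arguments "--server.port=0"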

Connecting to scality/s3 server between docker containers

We are using a Python-based solution which loads and stores files from/to S3. For development and local testing we use a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions: one for the supporting backend services (mongo, restheart, redis and s3) and one containing the Python-based solution that exposes the REST API and uses those backend services.
When our "front-end" docker-compose group interacts with restheart, this works fine (using the name of the restheart container as the server host in HTTP calls). When we do the same with the scality/s3 server, it does not work.
The interesting part is that we have a Python test suite running on the host (Windows 10) which talks to the scality/s3 server through the ports forwarded by Vagrant to the scality/s3 container within the docker-compose group. There we used localhost as the endpoint_url and it works perfectly.
In the error case (when the frontend web service wants to write to S3), the "frontend" service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with http 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We are calling Scality with this boto3 code:
import boto3

s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')
s3.create_bucket(Bucket='raw-data')  # here the exception is raised
bucket = s3.Bucket('raw-data')
This issue is quite common. In your config.json file, which I assume you mount into your Docker container, there is a restEndpoints section, where you must associate a hostname with a default region. What that means is that your frontend hostname should be specified in there, mapped to a default region.
Do note that that default region does not prevent you from using other regions: it's just where your buckets will be created if you don't specify otherwise.
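A minimal sketch of what that could look like, assuming the CloudServer default region us-east-1 and the s3server hostname used as endpoint_url in the boto3 code above:
"restEndpoints": {
    "localhost": "us-east-1",
    "s3server": "us-east-1"
}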
In the future, I'd recommend you open an issue directly on the Zenko Forum, as this is where most of the community and core developers are.
Cheers,
Laure

Wildfly Swarm: Environment specific configuration of Keycloak Backend

Given is a Java EE application on WildFly that uses Keycloak as its authentication backend, configured in project-stages.yml:
swarm:
  deployment:
    my.app.war:
      web:
        login-config:
          auth-method: KEYCLOAK
The application will be deployed to different environments using a GitLab CD pipeline. Therefore the Keycloak specifics must be configured per environment.
So far the only working configuration I have found is adding a keycloak.json like this (the same file in every environment):
{
  "realm": "helsinki",
  "bearer-only": true,
  "auth-server-url": "http://localhost:8180/auth",
  "ssl-required": "external",
  "resource": "backend"
}
According to the WildFly Swarm documentation it should be possible to configure Keycloak in project-stages.yml like this:
swarm:
  keycloak:
    secure-deployments:
      my-deployment:
        realm: keycloakrealmname
        bearer-only: true
        ssl-required: external
        resource: keycloakresource
        auth-server-url: http://localhost:8180/auth
But when I deploy the application, no configuration is read:
2018-03-08 06:29:03,540 DEBUG [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) KeycloakServletException initialization
2018-03-08 06:29:03,540 DEBUG [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) using /WEB-INF/keycloak.json
2018-03-08 06:29:03,542 WARN [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) No adapter configuration. Keycloak is unconfigured and will deny all requests.
2018-03-08 06:29:03,545 DEBUG [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) Keycloak is using a per-deployment configuration.
If you take a look at the source of the above class, it looks like the only way around this is to provide a KeycloakConfigResolver. Does WildFly Swarm provide a resolver that reads project-stages.yml?
How can I configure environment-specific auth-server-urls?
A workaround would be to have different keycloak.json files, but I would rather use project-stages.yml.
I have a small WildFly Swarm project which configures Keycloak exclusively via project-defaults.yml here: https://github.com/Ladicek/swarm-test-suite/tree/master/wildfly/keycloak
From the snippets you posted, the only thing that looks wrong is this:
swarm:
  keycloak:
    secure-deployments:
      my-deployment:
The my-deployment name needs to be the actual name of the deployment, same as what you have in
swarm:
  deployment:
    my.app.war:
If you already have that, then I'm afraid I'd have to start speculating: which WildFly Swarm version do you use? Which Keycloak version?
Also, you could specify the swarm.keycloak.json.path property in your yml:
swarm:
  keycloak:
    json:
      path: path-to-keycloak-config-files-folder/keycloak-prod.json
and you can dynamically select a stage-specific configuration during startup of the application with the -Dswarm.project.stage option, as sketched below.
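A hedged sketch of how environment-specific stages could look in a single project-stages.yml (the stage names and file paths are placeholders):
project:
  stage: development
swarm:
  keycloak:
    json:
      path: config/keycloak-dev.json
---
project:
  stage: production
swarm:
  keycloak:
    json:
      path: config/keycloak-prod.json
The stage would then be selected at startup, e.g. java -Dswarm.project.stage=production -jar my-app-swarm.jar (the jar name is a placeholder).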
Further references:
cheat sheet: http://design.jboss.org/redhatdeveloper/marketing/wildfly_swarm_cheatsheet/cheat_sheet/images/wildfly_swarm_cheat_sheet_r1v1.pdf
using multiple swarm project stages (profiles) example: https://github.com/thorntail/thorntail/tree/master/testsuite/testsuite-project-stages/src/test/resources
https://docs.thorntail.io/2018.1.0/#_keycloak
