WildFly Swarm: Environment-specific configuration of Keycloak backend

Given is a Java EE application on WildFly that uses Keycloak as its authentication backend, configured in project-stages.yml:
swarm:
  deployment:
    my.app.war:
      web:
        login-config:
          auth-method: KEYCLOAK
The application will be deployed in different environments using a GitLab CD pipeline, so the Keycloak specifics must be configured per environment.
So far the only working configuration I have found is adding a keycloak.json like this (the same file in every environment):
{
  "realm": "helsinki",
  "bearer-only": true,
  "auth-server-url": "http://localhost:8180/auth",
  "ssl-required": "external",
  "resource": "backend"
}
According to the WildFly Swarm documentation, it should be possible to configure Keycloak in project-stages.yml like this:
swarm:
  keycloak:
    secure-deployments:
      my-deployment:
        realm: keycloakrealmname
        bearer-only: true
        ssl-required: external
        resource: keycloakresource
        auth-server-url: http://localhost:8180/auth
But when I deploy the application, no configuration is read:
2018-03-08 06:29:03,540 DEBUG [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) KeycloakServletException initialization
2018-03-08 06:29:03,540 DEBUG [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) using /WEB-INF/keycloak.json
2018-03-08 06:29:03,542 WARN [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) No adapter configuration. Keycloak is unconfigured and will deny all requests.
2018-03-08 06:29:03,545 DEBUG [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) Keycloak is using a per-deployment configuration.
If you take a look at the source of the above class, it looks like the only way around this is to provide a KeycloakConfigResolver. Does WildFly Swarm provide a resolver that reads project-stages.yml?
How can I configure environment-specific auth-server-urls?
A workaround would be to have different keycloak.json files, but I would rather use project-stages.yml.

I have a small WildFly Swarm project which configures Keycloak exclusively via project-defaults.yml here: https://github.com/Ladicek/swarm-test-suite/tree/master/wildfly/keycloak
From the snippets you post, the only thing that looks wrong is this:
swarm:
  keycloak:
    secure-deployments:
      my-deployment:
The my-deployment name needs to be the actual name of the deployment, same as what you have in
swarm:
  deployment:
    my.app.war:
If you already have that, then I'm afraid I'd have to start speculating: which WildFly Swarm version do you use? Which Keycloak version?
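In other words, the key under secure-deployments must match the deployment name. For the snippets above, it would presumably look like this (a sketch reusing the placeholder values from the question):

```yaml
swarm:
  keycloak:
    secure-deployments:
      my.app.war:
        realm: keycloakrealmname
        bearer-only: true
        ssl-required: external
        resource: keycloakresource
        auth-server-url: http://localhost:8180/auth
```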

Also you could specify the swarm.keycloak.json.path property in your yml:
swarm:
  keycloak:
    json:
      path: path-to-keycloak-config-files-folder/keycloak-prod.json
and you can dynamically select a yml config file at application startup with the -Dswarm.project.stage option.
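For example (a sketch modeled on the project-stages test suite; the stage name and file paths are made up), project-stages.yml can hold one YAML document per stage, separated by `---`:

```yaml
swarm:
  keycloak:
    json:
      path: config/keycloak-dev.json
---
project:
  stage: production
swarm:
  keycloak:
    json:
      path: config/keycloak-prod.json
```

Starting the uberjar with `java -jar my.app-swarm.jar -Dswarm.project.stage=production` would then pick the production document.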
Further references:
cheat sheet: http://design.jboss.org/redhatdeveloper/marketing/wildfly_swarm_cheatsheet/cheat_sheet/images/wildfly_swarm_cheat_sheet_r1v1.pdf
using multiple swarm project stages (profiles) example: https://github.com/thorntail/thorntail/tree/master/testsuite/testsuite-project-stages/src/test/resources
https://docs.thorntail.io/2018.1.0/#_keycloak

Related

My containerized tomcat web application doesn't see configured log4j2.xml

My web application works fine with the created log4j2.xml file on an AWS EC2 instance. But now I have containerized it and it's running in ECS Fargate. I can see Catalina logs in CloudWatch, but not the application-specific logs that I configured in log4j2.xml. log4j2.xml is located at a specific path, /var/webapp/conf, and I've put the path in catalina.properties as shared.loader=/var/webapp/conf. Also, I see this ERROR in my Catalina logs:
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'log4j2.debug' to show Log4j2 internal initialization logging.
Note: I don't want to change Tomcat's default logging. I'm just trying to send my application logs to the console as well, so I can see all the logs in one CloudWatch log stream.
The log4j log driver configuration is not being recognised by your Fargate task. The reason is that Fargate tasks only support certain logging drivers, set up via the task definition.
Amazon ECS task definitions for Fargate support the awslogs, splunk, firelens, and fluentd log drivers for the log configuration.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html
I recommend using the awslogs (CloudWatch) log driver.
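A container definition using the awslogs driver might look like this (a sketch; the log group, region, and stream prefix are placeholders):

```json
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-webapp",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "tomcat"
    }
}
```

With this in place, anything the application writes to stdout/stderr ends up in the configured CloudWatch log group, so pointing the log4j2 configuration at a Console appender would make the application logs visible alongside the Catalina logs.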

Elastic Beanstalk Environment for multi-container docker fails to be created due to LaunchWaitCondition

I'm trying to set up a sample Elastic Beanstalk environment for multi-container Docker, but it fails to be created due to an error.
environment tier: web-server
other configuration info: https://i.stack.imgur.com/gKwBn.png
The environment, however, is never created. Here is the error output:
WARN Environment health has transitioned from Pending to Severe.
Initialization in progress (running for 15 minutes). None of the instances are sending data.
ERROR Stack named 'awseb-e-at4dw9xg2u-stack' aborted operation.
Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBInstanceLaunchWaitCondition].
ERROR LaunchWaitCondition failed.
The expected number of EC2 instances were not initialized within the given time.
Rebuild the environment. If this persists, contact support.
This issue was solved by allowing inbound & outbound traffic in the default network ACL.
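The equivalent change via the AWS CLI might look like this (a sketch; the ACL ID and rule numbers are placeholders, and the wide-open 0.0.0.0/0 rules mirror a default ACL):

```shell
# Allow all inbound traffic on the default network ACL
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --ingress --rule-number 100 --protocol -1 --rule-action allow \
    --cidr-block 0.0.0.0/0
# Allow all outbound traffic on the default network ACL
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --egress --rule-number 100 --protocol -1 --rule-action allow \
    --cidr-block 0.0.0.0/0
```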

Keycloak SPI Providers and layers not loading when using Docker

I'm trying to setup a docker image with some custom things, such as a logback extension, so I have some CLI scripts, like this one:
/subsystem=logging:remove()
/extension=org.jboss.as.logging:remove()
/extension=com.custom.logback:add()
/subsystem=com.custom.logback:add()
I also have CLI scripts to configure the datasource pool, themes, and some SPIs in the keycloak-server subsystem. I put these scripts in the /opt/jboss/startup-scripts directory. However, when I create the container, things do not work well: the scripts are not loaded as expected, and Keycloak starts with an error, not loading providers such as the password policies used by the realms.
When I'm using standalone Keycloak, all SPI providers are loaded fine, as the log below shows:
2019-07-25 18:27:07.906 WARN [org.keycloak.services] (ServerService Thread Pool -- 65) KC-SERVICES0047: custom-password-policy (com.custom.login.password.PasswordSecurityPolicyFactory) is implementing the internal SPI password-policy. This SPI is internal and may change without notice
2019-07-25 18:27:07.909 WARN [org.keycloak.services] (ServerService Thread Pool -- 65) KC-SERVICES0047: custom-event (com.custom.event.KeycloakServerEventListenerProviderFactory) is implementing the internal SPI eventsListener. This SPI is internal and may change without notice
2019-07-25 18:27:08.026 WARN [org.keycloak.services] (ServerService Thread Pool -- 65) KC-SERVICES0047: custom-mailer (com.custom.mail.MessageSenderProviderFactory) is implementing the internal SPI emailSender. This SPI is internal and may change without notice
2019-07-25 18:27:08.123 WARN [org.keycloak.services] (ServerService Thread Pool -- 65) KC-SERVICES0047: custom-user-domain-verification (com.custom.login.domain.UserDomainVerificationFactory) is implementing the internal SPI authenticator. This SPI is internal and may change without notice
2019-07-25 18:27:08.123 WARN [org.keycloak.services] (ServerService Thread Pool -- 65) KC-SERVICES0047: custom-recaptcha-username-password (com.custom.login.domain.RecaptchaAuthenticatorFactory) is implementing the internal SPI authenticator. This SPI is internal and may change without notice
If I use the same package with Docker, using jboss/keycloak:6.0.1 as the image base, the providers do not load. I'm packaging them as modules, adding them to the $JBOSS_HOME/modules folder and configuring them with the script below:
/subsystem=keycloak-server:write-attribute(name=providers,value=[classpath:${jboss.home.dir}/providers/*,module:com.custom.custom-keycloak-server])
/subsystem=keycloak-server/theme=defaults:write-attribute(name=welcomeTheme,value=custom)
/subsystem=keycloak-server/theme=defaults:write-attribute(name=modules,value=[com.custom.custom-keycloak-server])
/subsystem=keycloak-server/spi=emailSender:add(default-provider=custom-mailer)
When I execute the script inside the container, everything works fine.
I tried both using volumes to map the jar with the providers and copying the jar when building the custom image, but neither way works.
I'm using the jboss/keycloak:6.0.1 Docker image and Keycloak 6.0.1 standalone, with layers and modules put in the same directories.
What am I doing wrong? What is the trick to using SPI providers with Docker, or is the image not intended for production or this type of need?
OK, I've found why this happens. It comes from /opt/jboss/tools/docker-entrypoint.sh:
#################
# Configuration #
#################

# If the server configuration parameter is not present, append the HA profile.
if echo "$@" | egrep -v -- '-c |-c=|--server-config |--server-config='; then
    SYS_PROPS+=" -c=standalone-ha.xml"
fi
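The check works because the inverted grep exits 0 (and the HA profile gets appended) only when none of the -c/--server-config forms appear in the arguments. A small demo of that logic (check_args is a made-up helper name; grep -E is the modern spelling of egrep):

```shell
# Mimics the entrypoint's check: the exit status of an inverted grep on "$@".
check_args() {
    if echo "$@" | grep -E -q -v -- '-c |-c=|--server-config |--server-config='; then
        echo "appending -c=standalone-ha.xml"
    else
        echo "keeping user-provided server config"
    fi
}

check_args -b 0.0.0.0                    # no -c flag: the HA profile is appended
check_args -b 0.0.0.0 -c=standalone.xml  # -c= present: the arguments are left alone
```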
It launches Keycloak in clustered mode; I think they considered standalone mode unsafe for production. The Keycloak documentation says:
Standalone operating mode is only useful when you want to run one, and
only one Keycloak server instance. It is not usable for clustered
deployments and all caches are non-distributed and local-only. It is
not recommended that you use standalone mode in production as you will
have a single point of failure. If your standalone mode server goes
down, users will not be able to log in. This mode is really only
useful to test drive and play with the features of Keycloak
To keep standalone mode, override the image to pass -c standalone.xml as a parameter:
CMD ["-b", "0.0.0.0", "-c", "standalone.xml"]
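Putting it together, a minimal extension image might look like this (a sketch; the jar name is a placeholder):

```dockerfile
FROM jboss/keycloak:6.0.1
# Deploy the custom providers via the deployments directory
COPY custom-providers.jar /opt/jboss/keycloak/standalone/deployments/
# Keep standalone (non-clustered) mode instead of the default standalone-ha.xml
CMD ["-b", "0.0.0.0", "-c", "standalone.xml"]
```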
https://hub.docker.com/r/jboss/keycloak/:
To add a custom provider extend the Keycloak image and add the provider to the /opt/jboss/keycloak/standalone/deployments/ directory.
Did you use a volume at /opt/jboss/keycloak/standalone/deployments/ for your custom providers?

Deployment of sample task fails in PCF

spring-cloud-dataflow-server-2.0.1.RELEASE.jar
I am trying to deploy the sample task app on SCDF running on PCF.
Deployment fails with the following Exception :
Shell side :
No Launcher found for the platform named 'default'. Available platform names are []
org.springframework.cloud.dataflow.rest.client.DataFlowClientException: No Launcher found for the platform named 'default'. Available platform names are []
SCDF Server side :
2019-03-25T08:00:33.81-0500 [APP/PROC/WEB/0] OUT 2019-03-25 13:00:33.815 ERROR 19 --- [io-8080-exec-10] o.s.c.d.s.c.RestControllerAdvice : Caught exception while handling a request
2019-03-25T08:00:33.81-0500 [APP/PROC/WEB/0] OUT java.lang.IllegalStateException: No Launcher found for the platform named 'default'. Available platform names are []
2019-03-25T08:00:33.81-0500 [APP/PROC/WEB/0] OUT at org.springframework.cloud.dataflow.server.service.impl.DefaultTaskExecutionService.findTaskLauncher(DefaultTaskExecutionService.java:199)
2019-03-25T08:00:33.81-0500 [APP/PROC/WEB/0] OUT at org.springframework.cloud.dataflow.server.service.impl.DefaultTaskExecutionService.executeTask(DefaultTaskExecutionService.java:151)
2019-03-25T08:00:33.81-0500 [APP/PROC/WEB/0] OUT at org.springframework.cloud.dataflow.server.service.impl.DefaultTaskExecutionService$$FastClassBySpringCGLIB$$422cda43.invoke(<generated>)
Any ideas ? Do I need to set a launcher ?
It appears you may not have configured a platform for Tasks.
Starting from v2.0, SCDF provides the flexibility to configure multiple platform backends for Tasks, so you can choose from a list of platforms where you'd want to launch the Task. You can read more about the feature in the release highlights blog.
If you haven't already configured the Task platform properties, please use the sample manifest.yml as a reference.
If you have set those properties and you still see this issue, feel free to share the manifest.yml - we can review for correctness. Of course, make sure to remove sensitive creds before sharing it.
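For reference, the Task-platform properties for Cloud Foundry are set on the server itself, e.g. in the manifest.yml environment block. A sketch modeled on the sample manifest (the API URL, org/space names, and credentials are all placeholders):

```yaml
applications:
- name: data-flow-server
  path: spring-cloud-dataflow-server-2.0.1.RELEASE.jar
  memory: 2G
  env:
    SPRING_CLOUD_DATAFLOW_TASK_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[default]_CONNECTION_URL: https://api.sys.example.com
    SPRING_CLOUD_DATAFLOW_TASK_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[default]_CONNECTION_ORG: my-org
    SPRING_CLOUD_DATAFLOW_TASK_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[default]_CONNECTION_SPACE: my-space
    SPRING_CLOUD_DATAFLOW_TASK_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[default]_CONNECTION_USERNAME: user
    SPRING_CLOUD_DATAFLOW_TASK_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[default]_CONNECTION_PASSWORD: secret
```

Once an account such as `default` is defined this way, the "No Launcher found for the platform named 'default'" error should go away.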
Just as complementary information: I got the same error when launching on a Kubernetes platform (OpenShift) and could resolve the problem by adding the following snippet to the dataflow-server's application.yaml:
spring:
  cloud:
    dataflow:
      task:
        platform:
          kubernetes:
            accounts:
              dev:
                namespace: devNamespace
                imagePullPolicy: Always
                entryPointStyle: exec
                limits:
                  cpu: 4
              qa:
                namespace: qaNamespace
                imagePullPolicy: IfNotPresent
                entryPointStyle: boot
                limits:
                  memory: 2048m
Reference: Dataflow documentation

Dataflow 1.2.0 YAML configuration changes

Yesterday I upgraded my development environment to Spring Cloud Dataflow 1.2.0 and all of my sink/source app dependencies.
I have two main issues:
javaOpts: -Xmx128m is no longer being picked up, so locally deployed apps have the default Xmx value.
Here is the format of my previously working Dataflow YAML config.
See full here: https://pastebin.com/p1JmLnLJ
spring:
  cloud:
    dataflow:
      applicationProperties:
        stream:
          spring:
            cloud:
              deployer:
                local:
                  javaOpts: -Xmx128m
Kafka config options like ssl.truststore.location etc. are not being read correctly. Another Stack Overflow post indicated these must be written like "[ssl.truststore.location]". Is there some documented working YAML config, or a list of breaking changes in 1.2.0? The file-based authentication block was also moved, but I was able to figure that one out.
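For reference, a sketch of that bracket notation (the truststore path is a placeholder, and the Kafka binder configuration path is an assumption): Spring Boot's relaxed binding treats a bracketed key as one literal property name instead of splitting it on its dots.

```yaml
spring:
  cloud:
    dataflow:
      applicationProperties:
        stream:
          spring:
            cloud:
              stream:
                kafka:
                  binder:
                    configuration:
                      "[ssl.truststore.location]": /etc/kafka/truststore.jks
```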
Yes, it looks like a bug in the Spring Cloud Local Deployer when considering the common application properties passed via args. I created https://github.com/spring-cloud/spring-cloud-deployer-local/issues/48 to track this.
