Use environment variables in WildFly datasource definition (file) - Docker

I want to repackage my WAR application as a self-contained Docker image - currently I am still deploying it as a WAR to WildFly 19.
Since I don't want the database password and/or URL to be part of the Docker image, I want them to be configurable from outside - as environment variables.
So my current Docker image includes a WildFly datasource definition as a -ds.xml file with env placeholders, since according to
https://blog.imixs.org/2017/03/17/use-environment-variables-wildfly-docker-container/
and other sources this should be possible.
My DS file is
<datasources xmlns="http://www.jboss.org/ironjacamar/schema">
    <datasource jndi-name="java:jboss/datasources/dbtDS" pool-name="benchmarkDS">
        <driver>dbt-datasource.ear_com.mysql.jdbc.Driver_5_1</driver>
        <connection-url>${DB_CONNECTION_URL,env.DB_CONNECTION_URL}</connection-url>
        <security>
            <user-name>${DB_USERNAME,env.DB_USERNAME}</user-name>
            <password>${DB_PASSWORD,env.DB_PASSWORD}</password>
        </security>
        <pool>[...]</pool>
    </datasource>
</datasources>
But starting the Docker container always leads to the environment variables not being recognized:
11:00:38,790 WARN [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (JCA PoolFiller) IJ000610: Unable to fill pool: java:jboss/datasources/dbtDS: javax.resource.ResourceException: IJ031084: Unable to create connection
at org.jboss.ironjacamar.jdbcadapters#1.4.22.Final//org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createLocalManagedConnection(LocalManagedConnectionFactory.java:345)
[...]
Caused by: javax.resource.ResourceException: IJ031083: Wrong driver class [com.mysql.jdbc.Driver] for this connection URL []
at org.jboss.ironjacamar.jdbcadapters#1.4.22.Final//org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createLocalManagedConnection(LocalManagedConnectionFactory.java:323)
The last line says that DB_CONNECTION_URL seems to be empty - I tried several combinations, believe me:
Wrong driver class [com.mysql.jdbc.Driver] for this connection URL []
I'm starting my container with:
docker run --name="dbt" --rm -it -p 8080:8080 -p 9990:9990 -e DB_CONNECTION_URL="jdbc:mysql://127.0.0.1:13306/dbt?serverTimezone=UTC" -e DB_USERNAME="dbt" -e DB_PASSWORD="_dbt" dbt
I even modified standalone.sh to print the environment, and DB_CONNECTION_URL IS there.
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /opt/jboss/wildfly
JAVA: /usr/lib/jvm/java/bin/java
DB_CONNECTION_URL: jdbc:mysql://127.0.0.1:13306/dbt?serverTimezone=UTC JAVA_OPTS: -server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true --add-exports=java.base/sun.nio.ch=ALL-UNNAMED --add-exports=jdk.unsupported/sun.misc=ALL-UNNAMED --add-exports=jdk.unsupported/sun.reflect=ALL-UNNAMED
=========================================================================
11:00:34,362 INFO [org.jboss.modules] (main) JBoss Modules version 1.10.1.Final
11:00:34,854 INFO [org.jboss.msc] (main) JBoss MSC version 1.4.11.Final
11:00:34,863 INFO [org.jboss.threads] (main) JBoss Threads version 2.3.3.Final
[...]
So what am I doing wrong that prevents WildFly from replacing the placeholders in my DS file?
They seem to be processed - since they evaluate to empty - but they should contain something...
Any suggestions appreciated.
Current Dockerfile
[...] building step above [...]
FROM jboss/wildfly:20.0.1.Final
USER root
RUN yum -y install zip wget && yum clean all
RUN sed -i 's/echo " JAVA_OPTS/echo " DB_CONNECTION_URL: $DB_CONNECTION_URL JAVA_OPTS/g' /opt/jboss/wildfly/bin/standalone.sh && \
cat /opt/jboss/wildfly/bin/standalone.sh
RUN sed -i 's/<spec-descriptor-property-replacement>false<\/spec-descriptor-property-replacement>/<spec-descriptor-property-replacement>true<\/spec-descriptor-property-replacement><jboss-descriptor-property-replacement>true<\/jboss-descriptor-property-replacement><annotation-property-replacement>true<\/annotation-property-replacement>/g' /opt/jboss/wildfly/standalone/configuration/standalone.xml
USER jboss
COPY --from=0 /_build/dbt-datasource.ear /opt/jboss/wildfly/standalone/deployments/
ADD target/dbt.war /opt/jboss/wildfly/standalone/deployments/

Answer to myself - perhaps good to know for others later:
Placeholders in -ds.xml files are NOT supported(!).
I added the same datasource definition to standalone.xml by patching it with sed, and now it works without further modification, more or less out of the box.
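As a minimal sketch of that approach (the file contents and the datasource details here are illustrative, not my exact build): create a stand-in standalone.xml and let sed splice a datasource with ${env.…} placeholders into the <datasources> element. WildFly resolves those placeholders from the environment at boot - which is exactly what never happened for the -ds.xml deployment.

```shell
#!/bin/sh
# Illustrative only: splice a datasource definition (with env placeholders)
# into the <datasources> section of a standalone.xml-style file.
CONFIG=standalone.xml

# Minimal stand-in for the real standalone.xml datasources subsystem.
cat > "$CONFIG" <<'EOF'
<subsystem xmlns="urn:jboss:domain:datasources:5.0">
    <datasources>
    </datasources>
</subsystem>
EOF

# Insert the datasource right after the opening <datasources> tag
# (GNU sed: a backslash-newline in the replacement emits a newline).
# "mysql" is a placeholder driver name here.
sed -i 's|<datasources>|<datasources>\
        <datasource jndi-name="java:jboss/datasources/dbtDS" pool-name="benchmarkDS">\
            <connection-url>${env.DB_CONNECTION_URL}</connection-url>\
            <driver>mysql</driver>\
            <security>\
                <user-name>${env.DB_USERNAME}</user-name>\
                <password>${env.DB_PASSWORD}</password>\
            </security>\
        </datasource>|' "$CONFIG"

grep -c 'env.DB_CONNECTION_URL' "$CONFIG"
```

In the real Dockerfile the same sed runs against /opt/jboss/wildfly/standalone/configuration/standalone.xml during the build.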

Related

Apache NiFi (on Docker): "only one of the HTTP and HTTPS connectors can be configured at one time" error

I have a problem adding authentication, due to new requirements, while using Apache NiFi without SSL, running it in a container.
The image version is apache/nifi:1.13.0.
It is said that SSL is unconditionally required to add authentication, and it is recommended to use the tls-toolkit in the NiFi image to add SSL. I worked through the following process:
I left out the environment variable nifi.web.http.port for HTTP communication and started the container in standalone mode with nifi.web.https.port=9443:
docker-compose up
I joined the container and ran the tls-toolkit script from the nifi-toolkit:
cd /opt/nifi/nifi-toolkit-1.13.0/bin &&\
sh tls-toolkit.sh standalone \
-n 'localhost' \
-C 'CN=yangeok,OU=nifi' \
-O -o $NIFI_HOME/conf
Attempt 1
I organized the files in the directory $NIFI_HOME/conf. Three files - keystore.jks, truststore.jks, and nifi.properties - had been created in a folder localhost, named after the value of the -n option of the tls-toolkit script.
cd $NIFI_HOME/conf &&
cp localhost/*.jks .
I did not overwrite $NIFI_HOME/conf/nifi.properties with the generated $NIFI_HOME/conf/localhost/nifi.properties wholesale; instead, only the following properties were carried over into $NIFI_HOME/conf/nifi.properties:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=localhost
nifi.web.https.port=9443
Restarted container
docker-compose restart
The container died with the error log below:
Only one of the HTTP and HTTPS connectors can be configured at one time
Attempt 2
After executing the tls-toolkit script, all files were overwritten, including nifi.properties:
cd $NIFI_HOME/conf &&
cp localhost/* .
Restarted container
docker-compose restart
The container died with the same error log
Hint
The dead container's volume was still accessible, so I copied out nifi.properties and checked it; whenever I did docker-compose up or restart, it changed as follows.
The part I had overwritten or modified:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=localhost
nifi.web.https.port=9443
The changed part after re-running the container:
nifi.web.http.host=a8e283ab9421
nifi.web.http.port=9443
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=a8e283ab9421
nifi.web.https.port=9443
I'd like to know how to run the container with http.host and http.port empty. My docker-compose.yml file is as follows:
version: '3'
services:
  nifi:
    build:
      context: .
      args:
        NIFI_VERSION: ${NIFI_VERSION}
    container_name: nifi
    user: root
    restart: unless-stopped
    network_mode: bridge
    ports:
      - ${NIFI_HTTP_PORT}:8080/tcp
      - ${NIFI_HTTPS_PORT}:9443/tcp
    volumes:
      - ./drivers:/opt/nifi/nifi-current/drivers
      - ./templates:/opt/nifi/nifi-current/templates
      - ./data:/opt/nifi/nifi-current/data
    environment:
      TZ: 'Asia/Seoul'
      ########## JVM ##########
      NIFI_JVM_HEAP_INIT: ${NIFI_HEAP_INIT} # The initial JVM heap size.
      NIFI_JVM_HEAP_MAX: ${NIFI_HEAP_MAX} # The maximum JVM heap size.
      ########## Web ##########
      # NIFI_WEB_HTTP_HOST: ${NIFI_HTTP_HOST} # nifi.web.http.host
      # NIFI_WEB_HTTP_PORT: ${NIFI_HTTP_PORT} # nifi.web.http.port
      NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST} # nifi.web.https.host
      NIFI_WEB_HTTP_PORT: ${NIFI_HTTPS_PORT} # nifi.web.https.port
Thank you

Hugo server in Docker container not reachable in Windows 10

A few days ago I started a little side project: dockerizing my Hugo build on my Windows 10 machine. The Hugo container itself, which runs as a Linux container, was the easy part and seems to work (at least judging by the console output):
$ docker run --rm -it -p 1313:1313/tcp hugo:latest
Building sites …
Replace Autoprefixer browsers option to Browserslist config.
Use browserslist key in package.json or .browserslistrc file.
Using browsers option cause some error. Browserslist config
can be used for Babel, Autoprefixer, postcss-normalize and other tools.
If you really need to use option, rename it to overrideBrowserslist.
Learn more at:
https://github.com/browserslist/browserslist#readme
https://twitter.com/browserslist
WARN 2019/11/23 14:05:35 found no layout file for "HTML" for "section": You should create a template file which matches Hugo Layouts Lookup Rules for this combination.
                   | DE | EN
+------------------+----+----+
  Pages            |  9 |  7
  Paginator pages  |  0 |  0
  Non-page files   |  0 |  0
  Static files     | 25 | 25
  Processed images |  0 |  0
  Aliases          |  1 |  0
  Sitemaps         |  2 |  1
  Cleaned          |  0 |  0
Total in 680 ms
Watching for changes in /app/{assets,content,i18n,layouts,static}
Watching for config changes in /app/config.yaml
Environment: "development"
Serving pages from memory
Running in Fast Render Mode. For full rebuilds on change: hugo server --disableFastRender
Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
Press Ctrl+C to stop
The Dockerfile that I run looks like this:
FROM node:13-alpine
ENV VERSION 0.59.1
EXPOSE 1313
RUN apk add --no-cache git openssl py-pygments libc6-compat g++ curl
RUN curl -L https://github.com/gohugoio/hugo/releases/download/v${VERSION}/hugo_extended_${VERSION}_Linux-64bit.tar.gz | tar -xz \
&& cp hugo /usr/bin/hugo \
&& apk del curl \
&& hugo version
WORKDIR /app
COPY assets assets
COPY content content
COPY i18n i18n
COPY layouts layouts
COPY static static
COPY package.json package.json
COPY postcss.config.js postcss.config.js
COPY config.yaml config.yaml
RUN yarn
CMD [ "hugo", "server", "--buildDrafts","--watch" ]
The hard part for me now is connecting to the running Hugo server from the browser on my host system (Windows 10 Pro).
I basically tried everything: localhost:1313 and http://172.17.0.2:1313/ (the container IP I get by running docker inspect <container ID>), with the firewall enabled and disabled, but nothing seems to work.
To verify that it should work, I ran hugo server --buildDrafts --watch directly on my host system and can access the server just fine. I also invested several hours in reading up on the issue, but none of the solutions seem to work in my case.
How can I solve this issue?
Here's your problem:
Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
Hugo is binding to the loopback address (127.0.0.1) inside the container. It does this by default because hugo serve is meant strictly as a development tool, not for actually serving pages in production. In order to avoid any security issues, it defaults to binding to the loopback interface so that you can only connect to it from the local machine.
Unfortunately, in the context of a container, localhost means "this container". So with Hugo bound to 127.0.0.1 inside a container you'll never be able to connect to it.
The solution is to provide a different bind address using the --bind option. You probably want to modify your Dockerfile so that it looks like:
CMD [ "hugo", "server", "--buildDrafts", "--watch", "--bind", "0.0.0.0" ]
This will cause hugo to bind to "all interfaces" inside the container, which should result in it working as you expect.
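If you'd rather not rebuild the image to try this out, the same flag can be passed by overriding the command at run time (using the image tag from the question; arguments after the image name replace the Dockerfile's CMD):

```
$ docker run --rm -it -p 1313:1313/tcp hugo:latest hugo server --buildDrafts --watch --bind 0.0.0.0
```

The startup banner should then report the bind address as 0.0.0.0 instead of 127.0.0.1, and localhost:1313 on the host should reach the container.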

Training an assistant in Portuguese, Rasa NLU not running with Docker

I am working with Rasa NLU. I want to train a language model in Portuguese and have it running inside a container. I can train on the language dataset, but I am not able to get it to run.
I've created an image from the official rasa_nlu one, running with the spaCy Portuguese pipeline, and placed it in a container on Docker.
I am able to use the rasa_nlu.train command to train the language model without problems, or at least that is what it seems.
When I try to run it using the data that I trained, I get an error message complaining about missing parameters in the command that I used.
Here is the docker-compose service that I try to use when running the container:
rasa_nlu:
  image: rasa_nlu_pt
  volumes:
    - ./models/rasa_nlu:/app/models
  command:
    - start
    - --path
    - /app/models
and it gives the following error message:
usage: run.py [-h] -d CORE [-u NLU] [-v] [-vv] [--quiet] [-p PORT]
[--auth_token AUTH_TOKEN] [--cors [CORS [CORS ...]]]
[--enable_api] [-o LOG_FILE] [--credentials CREDENTIALS]
[-c CONNECTOR] [--endpoints ENDPOINTS] [--jwt_secret JWT_SECRET]
[--jwt_method JWT_METHOD]
run.py: error: the following arguments are required: -d/--core
The same happens if I run it without the other containers:
$ docker run -v $(pwd):/app/project -v $(pwd)/models/rasa_nlu:/app/models -p 5000:5000 rasa_nlu_pt start --path app/models
usage: run.py [-h] -d CORE [-u NLU] [-v] [-vv] [--quiet] [-p PORT]
[--auth_token AUTH_TOKEN] [--cors [CORS [CORS ...]]]
[--enable_api] [-o LOG_FILE] [--credentials CREDENTIALS]
[-c CONNECTOR] [--endpoints ENDPOINTS] [--jwt_secret JWT_SECRET]
[--jwt_method JWT_METHOD]
run.py: error: the following arguments are required: -d/--core
I used the same command to run the service with the English spaCy pipeline provided by Rasa and it worked as it should, but now it is giving this error message. What other information am I missing?
Depending on which pipeline you are using for your NLU, you should use the rasa/nlu:tensorflow-latest or rasa/nlu:spacy-latest image and not rasa/nlu:latest. This will solve the problem.
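For example, a minimal docker-compose service along those lines (the image tag is the point of the change; the volume and command are carried over from the question and may need adjusting for your setup):

```yaml
rasa_nlu:
  # spaCy-based pipelines need the spaCy variant of the image,
  # not rasa/nlu:latest
  image: rasa/nlu:spacy-latest
  volumes:
    - ./models/rasa_nlu:/app/models
  command:
    - start
    - --path
    - /app/models
```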

startNodeManager.sh not found

I have been trying to run Oracle WebLogic in Docker containers and I am facing trouble starting the NodeManager. I ran the following command:
docker run -d --name MS1 --link wlsadmin:wlsadmin -p 8001:8001 -e ADMIN_PASSWORD=#123 \
-e MS_NAME=MS1 --volumes-from wlsadmin a5e55 createServer.sh
Under normal circumstances this is expected to start the NodeManager.
I am able to access the WebLogic console and start the Managed Server, which then returns the error:
-- Warning For server MS1, the Node Manager associated with machine Machine_MS1 is not reachable
This is the part of the log file that is returned on executing the above "docker run" command :
Domain Home: /u01/oracle/user_projects/domains/base_domain
Managed Server Name: MS1
NodeManager Name:
----> 'weblogic' admin password: ctebs#123
Waiting for WebLogic Admin Server on wlsadmin:7001 to become available...
WebLogic Admin Server is now available. Proceeding...
Setting NodeManager
----> No NodeManager Name set
Node Manager Name: Machine_MS1
Node Manager Home for Container: /u01/oracle/user_projects/domains/base_domain/Machine_MS1
cp: cannot stat '/u01/oracle/user_projects/domains/base_domain /bin/startNodeManager.sh': No such file or directory
cp: cannot stat '/u01/oracle/user_projects/domains/base_domain/nodemanager/*': No such file or directory
NODEMGR_HOME_STR: NODEMGR_HOME="/u01/oracle/user_projects/domains/base_domain/Machine_MS1"
NODEMGRHOME_STR: NodeManagerHome=/u01/oracle/user_projects/domains/base_domain/Machine_MS1
DOMAINSFILE_STR: DomainsFile=/u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.domains
LOGFILE_STR: LogFile=/u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.log
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/startNodeManager.sh: No such file or directory
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.properties: No such file or directory
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.properties: No such file or directory
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.properties: No such file or directory
Starting NodeManager in background...
NodeManager started.
Connection refused (Connection refused). Could not connect to NodeManager. Check that it is running at /172.17.0.3:5556.
Starting server MS1 ...No stack trace available.
This Exception occurred at Tue Dec 12 03:38:06 GMT 2017.
weblogic.management.scripting.ScriptException: Error occurred while performing start : Server with name MS1 failed to be started
No stack trace available.
How can I get past this error message?
You can try and follow this OracleWebLogic workshop intro which points out:
The ~/docker-images/OracleWebLogic/samples/1221-domain/container-scripts has useful Bash and WLST scripts that provide three possible modes to run WebLogic Managed Servers on a Docker container. Make sure you have an AdminServer container running before starting a ManagedServer container.
The sample scripts will by default, attempt to find the AdminServer running at t3://wlsadmin:8001. You can change this.
But most importantly, the AdminServer container has to be linked with Docker's --link parameter.
Below, are the three suggestions for running ManagedServer Container within the sample 12c-domain:
Start NodeManager (Manually):
docker run -d --link wlsadmin:wlsadmin startNodeManager.sh
Start NodeManager and Create a Machine Automatically:
docker run -d --link wlsadmin:wlsadmin createMachine.sh
Start NodeManager, Create a Machine, and Create a ManagedServer Automatically
docker run -d --link wlsadmin:wlsadmin createServer.sh
See more at "Example of Image with WLS Domain", removed in commit e49bb4d in Apr. 2019, 2 years later, since Oracle no longer supports those WebLogic versions.

How to pass variable as attribute to xml configuration file in Wildfly with Docker

I'm trying to pass values from a docker-compose.yml file to the WildFly configuration dynamically.
I want flexibility in the mail configuration - just for a quick change of address, username, or port.
In this case, I tried to do that by forwarding environment variables from docker-compose.yml through the Dockerfile as arguments: "-Dargumentname=$environmentvariable".
Currently WildFly interrupts on start with the error:
[org.jboss.as.controller.management-operation] (ServerService Thread
Pool -- 45) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "mail"),
("mail-session" => "default") ]) - failure description: "WFLYCTL0097: Wrong type for ssl. Expected [BOOLEAN] but was STRING"
Same situation, if I try to pass PORT as value in outbound-socket-binding block.
I have no idea how to pass integers/booleans from docker-compose file to Wildfly configuration.
docker-compose.yml (part)
...
services:
some_service:
image: image_name:tag
environment:
- USERNAME=some_username#...
- PASSWORD=some_password
- SSL=true // I also tried with value 1
- HOST=smtp.gmail.com
- PORT=465 // also doesn't work
...
Dockerfile:
FROM some_wildfly_base_image
# install cgroup-bin package
USER root
RUN apt-get update
RUN apt-get install -y cgroup-bin
RUN apt-get install -y bc
USER jboss
ADD standalone-myapp.xml /opt/jboss/wildfly/standalone/configuration/
ADD standalone.conf /opt/jboss/wildfly/bin/
ADD modules/ /opt/jboss/wildfly/modules/
RUN wildfly/bin/add-user.sh usr usr --silent
# Set the default command to run on boot
# This will boot WildFly in the standalone mode and bind to all interface
CMD [ "/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-myapp.xml", "-Dmail.username=$USERNAME", "-Dmail.password=$PASSWORD", "-Dmail.ssl=$SSL", "-Drm.host=$HOST", "-Drm.port=$PORT" ]
standalone-myapp.xml:
...
<subsystem xmlns="urn:jboss:domain:mail:2.0">
<mail-session name="default" jndi-name="java:jboss/mail/Default">
<smtp-server password="${mail.password}" username="${mail.username}" ssl="${mail.ssl}" outbound-socket-binding-ref="mail-smtp"/>
</mail-session>
</subsystem>
...
<outbound-socket-binding name="mail-smtp">
<remote-destination host="${rm.host}" port="465"/>
</outbound-socket-binding>
...
Almost there. In your Dockerfile you have defined environment variables, therefore you need to reference them as environment variables in your WildFly config. The easiest way is to prefix the variable name with the env. prefix. So in your example you have the env variables HOST, SSL, USERNAME..., which you can reference in standalone.xml like this:
<smtp-server password="${env.PASSWORD}" username="${env.USERNAME}" ssl="${env.SSL}" outbound-socket-binding-ref="mail-smtp"/>
Without the env. prefix, JBoss/WildFly will try to resolve the expression as a JVM property, which you'd have to specify with a -D flag.
You can also use a default-value fallback in your expressions, such as:
ssl="${env.SSL:true}"
This way, ssl will be set to the value of the environment variable named SSL, and if no such variable exists, the server will fall back to true.
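Putting both together, the mail subsystem and socket binding from the question could then look like this (a sketch; the variable names are carried over from the question and the defaults are just examples):

```xml
<subsystem xmlns="urn:jboss:domain:mail:2.0">
    <mail-session name="default" jndi-name="java:jboss/mail/Default">
        <smtp-server password="${env.PASSWORD}" username="${env.USERNAME}"
                     ssl="${env.SSL:true}" outbound-socket-binding-ref="mail-smtp"/>
    </mail-session>
</subsystem>
...
<outbound-socket-binding name="mail-smtp">
    <remote-destination host="${env.HOST:smtp.gmail.com}" port="${env.PORT:465}"/>
</outbound-socket-binding>
```

Because the expression is resolved before the attribute value is type-checked, this should also work for the BOOLEAN ssl attribute and the numeric port that were failing before.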
Happy hacking
