Running Tomcat with different resources for different environments - docker

I have an application-level context.xml with three different databases connections, and my application successfully connects and works fine against those databases. The war file is added to a Tomcat Docker image, and the container runs great.
But, what I really need is the ability to bring up my WAR file with different context.xml files in different environments (Development, QA, and Production). Each environment has its own set of three database connections (i.e. unique URLs/usernames/passwords but the same resource names).
Is there a mechanism in Tomcat where I can pass an environment variable into the Tomcat container at startup, and specify which context file to use? e.g. if I had META-INF/context_dev.xml, META-INF/context_qa.xml, and META-INF/context_prod.xml.
Or, is there some other different kind of mechanism I should be using to have one Docker image that works with three different sets of database resources?
Thanks,
John

In containers, with plain Docker as well as Kubernetes, environment variables are the standard way to pass configuration to your container.
Set up Tomcat so it takes them into account, and reference the variable names in the file.
For Tomcat itself (independently of whether you use containers), how to pass environment variables is explained here: Tomcat 8 - context.xml use Environment Variable in Datasource
How to pass environment variables to your container: How do I pass environment variables to Docker containers?
You can then pass the variables on the command line with raw docker, or use an .env file. Changing the command line to use different values, or pointing to a different .env file, will do the trick.
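As a rough sketch of how the pieces fit together (the variable and image names below are placeholders, and this assumes the values are exposed to Tomcat's ${...} substitution by passing them as system properties via JAVA_OPTS):

<!-- META-INF/context.xml: one set of resource definitions, no hard-coded credentials -->
<Resource name="jdbc/mydb1"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="org.postgresql.Driver"
          url="${DB1_URL}"
          username="${DB1_USER}"
          password="${DB1_PASS}" />

# pass environment-specific values when starting the container
docker run -p 8080:8080 \
  -e JAVA_OPTS="-DDB1_URL=jdbc:postgresql://qa-db:5432/mydb1 -DDB1_USER=app -DDB1_PASS=secret" \
  my-tomcat-image

# plain environment variables from an .env file work too, but they still need to be
# turned into system properties (e.g. in a setenv.sh or entrypoint script) before
# Tomcat's ${...} substitution can see them
docker run --env-file qa.env my-tomcat-image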

I put together a solution that works for me with minimal changes, taking the suggestions above as inspiration. Basically, I put ALL the resources for ALL the environments in context.xml and named them like this:
<Resource
    name="${PRODENV}/mydb1"
    XXXXXXX
/>
<Resource
    name="${PRODENV}/mydb2"
    XXXXXXX
/>
<Resource
    name="${QAENV}/mydb1"
    XXXXXXX
/>
<Resource
    name="${QAENV}/mydb2"
    XXXXXXX
/>
Then, when I start the container, I just add -DPRODENV=jdbc or -DQAENV=jdbc to the JAVA_OPTS environment variable. Only the two that I want get loaded, as appropriate. The rest are just never referenced.
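For example, with a Tomcat-based image the startup commands would look something like this (the image name is just a placeholder):

# production: only the ${PRODENV}/... resources resolve to jdbc/...
docker run -p 8080:8080 -e JAVA_OPTS="-DPRODENV=jdbc" my-app-tomcat

# QA: only the ${QAENV}/... resources resolve to jdbc/...
docker run -p 8080:8080 -e JAVA_OPTS="-DQAENV=jdbc" my-app-tomcat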

Related

Multiple Jenkins instances on the same Tomcat server in Linux

I want to set up two Jenkins instances on the same Tomcat server (8.5.34) on Linux.
I downloaded https://get.jenkins.io/war-stable/latest/jenkins.war and placed it in Tomcat's webapps folder as jen-dev and jen-qa.
However, I want the two Jenkins instances to use different Jenkins home locations:
JENKINS1_HOME: /opt/jen-dev
JENKINS2_HOME: /opt/jen-qa
If I set the variable below in the shell and start Tomcat using bin/startup.sh, jen-dev works fine as http://jenkins.dev.com/jen-dev.
export JENKINS_HOME="/opt/jen-dev"
How can I customize this installation to include the second Jenkins home and run it on the same server as http://jenkins.dev.com/jen-qa?
There are three ways to set the JENKINS_HOME parameter (cf. Jenkins Wiki):
as a system environment variable,
as a system property,
as a JNDI environment entry.
The first two options will apply to the entire Tomcat server, so you need to use JNDI. Create a descriptor file $CATALINA_BASE/conf/Catalina/localhost/jen-dev.xml with content:
<Context>
  <Environment type="java.lang.String" override="false"
               name="JENKINS_HOME" value="/opt/jen-dev" />
</Context>
and define a similar descriptor for the other Jenkins instance.
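For example, the second descriptor, $CATALINA_BASE/conf/Catalina/localhost/jen-qa.xml, would point at the other home directory:

<Context>
  <Environment type="java.lang.String" override="false"
               name="JENKINS_HOME" value="/opt/jen-qa" />
</Context>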

NextJS: Prevent env vars from being required at build time

We are working on a Dockerized NextJS application that is meant to be built once and then deployed to several environments, each with its own configuration. This configuration is set in the Docker container as environment variables at deployment time.
To achieve this, we are using the next.config.js file, splitting the vars into serverRuntimeConfig and publicRuntimeConfig as suggested here, and reading the values for the environment variables from process.env, i.e.:
module.exports = {
  serverRuntimeConfig: {
    mySecret: process.env.MY_SECRET,
    secondSecret: process.env.SECOND_SECRET,
  },
  publicRuntimeConfig: {
    staticFolder: process.env.STATIC_FOLDER_URL,
  },
}
The problem we have is that these variables are not set at build time (when we run next build), as they are environment specific and are supposed to be set at deployment. Because of this, the build fails, complaining about the missing variables.
Making a build per environment is not an option: as mentioned above, we want to build once (with next build), put the output of the build in a Docker container, and deploy that container to several environments.
Is there any way to solve this so that the application builds without the environment vars and we pass them afterwards at runtime (deployment)?
We finally found the issue.
We were importing, into a helper used on the isomorphic side, code that relied on serverRuntimeConfig variables, so those variables ended up being required at build time in order to create the bundle.
Removing the import from the helper fixed the issue.
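In case it helps others: runtime config is meant to be read via next/config rather than imported from shared modules, and serverRuntimeConfig should only be touched in server-only code paths (getServerSideProps, API routes), so its values are not needed to produce the bundle. A minimal sketch, assuming the next.config.js shown above (the page and prop names are made up):

// pages/example.js
import getConfig from 'next/config'

const { serverRuntimeConfig, publicRuntimeConfig } = getConfig()

export async function getServerSideProps() {
  // serverRuntimeConfig is only available here, on the server, at request time
  console.log(serverRuntimeConfig.mySecret)
  return { props: { staticFolder: publicRuntimeConfig.staticFolder } }
}

export default function Example({ staticFolder }) {
  // publicRuntimeConfig is safe to use on both server and client
  return <img src={`${staticFolder}/logo.png`} alt="logo" />
}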

EJBCA on Docker: use environmental variables in .properties files

I am trying to configure EJBCA 6.15.2.1 on WildFly 12.0.0.Final inside a Docker container with the help of EJBCA's .properties files. In $EJBCA_HOME/conf/externalra-gui.properties.sample there is a comment showing that one of the default settings is appserver.home=${env.APPSRV_HOME}. I tried to set other options in a similar way, e.g. in database.properties: database.datasource=${env.WF_DATASRC}.
I ran ant clean deployear and at first it didn't deploy my EJBCA instance properly: server.log showed that there was no datasource under the name "${env.WF_DATASRC}". It proceeded correctly after I changed the line to database.datasource=ejbcads, which is the exact value of the variable and the name of the datasource inside the WildFly server.
I get similar errors during further installation steps. Is there another way of setting EJBCA configuration using environment variables?

How to set up divolte.io for multiple websites on the same server?

I've set up a data pipeline using divolte.io to stream click data from a website to a server. I'm not sure how I can do this for multiple websites, because all the streams could get mixed up. Any ideas on how to do this?
On the same server, you need to bind to different ports.
Create more than one config file, setting divolte.global.server.port to a different value in each, then run the application with those configs.
Note that in order to use a separate config file, it actually needs to be in its own directory:
Divolte Collector will try to find configuration files at startup in the configuration directory. Typically this is the conf/ directory nested under the Divolte Collector installation. Divolte Collector will try to locate the configuration directory at ../conf relative to the startup script. The configuration directory can be overridden by setting the DIVOLTE_CONF_DIR environment variable. If set, the value will be used as configuration directory
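For example (the paths, port value, and exact file/script names here are assumptions, not lifted from the Divolte docs), each site gets its own directory with a config overriding the port, and you start one collector per directory:

# /etc/divolte/site-a/divolte-collector.conf
divolte {
  global {
    server {
      port = 8290
    }
  }
}

# start one collector per site, each pointing at its own configuration directory
DIVOLTE_CONF_DIR=/etc/divolte/site-a ./bin/divolte-collector
DIVOLTE_CONF_DIR=/etc/divolte/site-b ./bin/divolte-collector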
Alternatively, you could run the exact same config within many containers/VMs, then use port mappings around that.

Specify env settings on the command line

I have a node that runs several applications. These applications each have specific env settings. When I generate a release I start my node by just running ./rel/mynode/bin/mynode start. Is there an option that I could add to this command to override apps' env settings?
To answer your question: No, there is no parameter that you can pass into that command to load a different application env file.
However, if you are trying to load a different config file, for example a development file vs. a production file, you should check out how to do dynamic configuration with rebar.
I use it to run my application across differently configured environments (production and local testing).
I don't quite get what you mean by env settings. If you mean the applications' configuration parameters that are set in the {Par,Val} tuples under the env key in the .app files, then these can also be overridden in a system configuration file or directly on the command line. See the Configuring an Application section.
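A small sketch of those two override mechanisms (the application name and parameters are placeholders; for a release, these flags typically end up in the release's sys.config and vm.args rather than a raw erl invocation):

%% override.config: values here take precedence over the env list in myapp.app
[
  {myapp, [
    {db_host, "qa-db.example.com"},
    {pool_size, 10}
  ]}
].

%% pass the file on the command line (the .config extension can be omitted):
%%   erl -config override ...
%% or override a single parameter directly:
%%   erl -myapp pool_size 20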
