Building a generic Camel/Docker image and applying different Camel routes when the container starts - docker

I've got a requirement to create some impostors/stubs/reflectors (pick your own term...) using Apache Camel. These stubs need to:
listen to a bunch of IBM MQ queues
for each queue:
grab messages off the queue when they appear
extract info from the message via simple XPath or regex extracts and construct a response via a template
wait some predefined period
send the response back on another queue
I'm no Camel expert, but I can work out how to do that much...
However, given there's going to be lots of these stubs and I want to use different subsets of these stubs in different circumstances, I want to build a generic Camel Docker image, and apply different sets of stubs to it when I start the Docker container.
If it helps clarify things further, I want to be able to store the stub definitions as uncompiled code (e.g. XML, Simple, whatever) in git repos - separate from the Docker image - and have the Docker/Camel container load those stub definitions via either volume mount/s or environment variables. Once the container starts, those stub definitions will persist till the container is killed off - I don't need to manipulate the stubs except when the container starts.
Key thing is that the Camel/Docker image has to be generic, not pre-built with a specific set of stub definitions.
I can handle the Docker side of things OK - what I can't work out is how to have Camel load the stub definitions when Camel starts (i.e. when the Docker container is created) rather than have the stubs loaded into e.g. a WAR at compile time.
Thanks for any help or suggestions

It can easily be implemented in the following way:
The Camel/Docker image has a main command that runs Camel routes from a Spring XML file (you can use whatever runtime you want: Camel standalone, Spring Boot, etc.)
The Spring XML file is read from a location hardcoded inside the Docker image (e.g. /app/config.xml)
You start the Docker container and map the required Spring XML file to /app/config.xml
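As a very rough sketch of those three steps, a single stub definition might look like the file below. The queue names, the XPath, the delay and the response template are invented placeholders, and the IBM MQ connection factory wiring for the jms component is omitted (the generic image would define it once):

<?xml version="1.0" encoding="UTF-8"?>
<!-- foo-stub.xml, mounted at start-up, e.g.
     docker run -v $PWD/stubs/foo-stub.xml:/app/config.xml my-camel-image -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd
         http://camel.apache.org/schema/spring
         http://camel.apache.org/schema/spring/camel-spring.xsd">

  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <route id="foo-stub">
      <!-- grab messages off the request queue -->
      <from uri="jms:queue:FOO.REQUEST"/>
      <!-- extract a value via XPath... -->
      <setHeader headerName="orderId">
        <xpath resultType="java.lang.String">/order/@id</xpath>
      </setHeader>
      <!-- ...build the response from a template... -->
      <setBody>
        <simple>&lt;ack orderId="${header.orderId}"/&gt;</simple>
      </setBody>
      <!-- ...wait a predefined period... -->
      <delay><constant>2000</constant></delay>
      <!-- ...and send the response back on another queue -->
      <to uri="jms:queue:FOO.RESPONSE"/>
    </route>
  </camelContext>
</beans>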
If you cannot map XML files into the running container, you can implement your own small boot part that, on startup, reads the XML content from some environment variable, stores it in a temporary file, and runs the same logic for starting Spring from a file context. In that case you can run the Docker container and pass the Spring XML as an environment variable.
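A minimal sketch of that bootstrap as a shell entrypoint (the CAMEL_ROUTES_XML variable name is made up for this sketch, and org.apache.camel.spring.Main with its -fa/-fileApplicationContext option is just one possible way to start a file-based Spring context; adapt it to whatever runtime you chose):

#!/bin/sh
# entrypoint.sh: prefer a mounted /app/config.xml, fall back to an env variable
CONFIG=/app/config.xml
if [ ! -f "$CONFIG" ]; then
  if [ -z "$CAMEL_ROUTES_XML" ]; then
    echo "Mount /app/config.xml or set CAMEL_ROUTES_XML" >&2
    exit 1
  fi
  # store the XML content from the environment in a temporary file
  CONFIG=$(mktemp)
  printf '%s' "$CAMEL_ROUTES_XML" > "$CONFIG"
fi
# start Spring (and the Camel routes defined in it) from the file context
exec java -cp '/app/lib/*' org.apache.camel.spring.Main -fa "$CONFIG"

The container can then be started either with a volume mount or with something like docker run -e CAMEL_ROUTES_XML="$(cat stubs/foo-stub.xml)" my-camel-image.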

Related

Zap2Docker custom scan rules

I'm trying to find a way to write my own OWASP ZAP scan rule, for the purpose of running a baseline scan using zap2docker's baseline_scan.py and defining the rule severity (info/warn/fail) in the config file specified by "-c".
When going through the ZAP repository, I found the source file of a rule (e.g. 10055) and would like to create something similar, but in a way that would not force me to create my own Docker image, so that I could load the rule from within the CI/CD pipeline.
Is there any way to create a custom ZAP rule and run it as part of the baseline_scan with defined severity?
Yes ... but it might be non-trivial.
You have a couple of options:
Write a new ZAP add-on containing your rule and copy it into the right place in the docker image before starting your scan
Write a script passive scan rule and configure ZAP to load it on start
For more help doing either of these things the ZAP User Group is the best place to ask: https://groups.google.com/group/zaproxy-users
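As a rough sketch of the first option (the base image tag and the /zap/plugin path are assumptions about the zap2docker layout; verify them against the image you actually use):

# derive an image with the custom add-on dropped into ZAP's plugin directory
cat > Dockerfile <<'EOF'
FROM owasp/zap2docker-stable
COPY my-custom-rule.zap /zap/plugin/
EOF
docker build -t zap-custom .
# then run the baseline scan from the derived image as usual
docker run -t zap-custom zap-baseline.py -t https://example.com -c baseline.conf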

Spring Cloud Data Flow - Task Properties

I'm using SCDF and I was wondering if there is any way to configure default properties for one application.
I have a task application registered in SCDF, and this application needs some JDBC properties to access a business database:
app.foo.export.datasource.url=jdbc:db2://blablabla
app.foo.export.datasource.username=testuser
app.foo.export.datasource.password=**************
app.foo.export.datasource.driverClassName=com.ibm.db2.jcc.DB2Driver
Do I really need to put these props in a properties file like this? (It's a bit weird to define them during the launch.)
task launch fooTask --propertiesFile aaa.properties
Also, we cannot use the REST API, because the credentials would appear in the URL.
Or is there another way/place to define default business props for an application? These props will only be used by this task.
The purpose is to have one place where the OPS team can configure the URL and credentials without playing with the launch command.
Thank you.
Yeah, SCDF feels a bit weird in the configuration area.
As you wrote, you can register an application and create tasks, but all the configuration is passed at the first launch of the task. Put the other way round: you can't fully install/configure a task without running it.
As soon as a task has run once, you can relaunch it without any configuration and it uses the configuration from before. The whole config is saved in the SCDF database.
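In shell terms, that looks like:

# first launch: hand the properties over once
task launch fooTask --propertiesFile aaa.properties
# later launches: no properties needed, the stored configuration is reused
task launch fooTask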
However, if you try to overwrite an existing configuration property with a new value, SCDF seems to ignore the new value and continues to use the old one. No idea whether this is by design, a bug, or something we are doing wrong.
Because we run SCDF tasks on Kubernetes and are used to configuring all infrastructure in YAML files, the best option we found was to write our own operator for SCDF.
This operator works against the REST interface of SCDF and also compensates for the weird configuration issues mentioned above.
For example, the overwrite issue is solved by first deleting the configuration and then recreating it with the new values.
With this operator we have achieved what you are looking for: all our SCDF configuration is in a git repository and all changes are made through merge requests. Thanks to CI/CD, the new configuration is used on the next launch.
However, a Kubernetes operator like this should really be part of the product. Without it, SCDF on Kubernetes feels quite "alien".

How to properly configure Cygnus?

I was playing a bit with Cygnus, and I was wondering how to configure it properly. I've seen that both agent_<id>.conf and cygnus_instance_<id>.conf files are needed. I understand the purpose of the first one, but not the second one. In addition, what about the grouping_rules.conf file? Is there any other configuration file?
File by file:
The agent_<id>.conf file is the main configuration file for Cygnus. It inherits its syntax from Apache Flume, the base technology Cygnus is built on, and it is used to declare and configure the sources, channels and sinks of your Cygnus agent (a skeletal example follows this list).
The cygnus_instance_<id>.conf file is used to configure Cygnus parameters that cannot be set as part of the agent configuration, such as the logging file, the management interface port, etc. The Cygnus service will run as many instances as there are cygnus_instance_<id>.conf files. That's why an <id> must be provided: it is used to find the matching agent_<id>.conf file.
The grouping_rules.conf file is used when you want the advanced Grouping Rules feature. This file may be empty (but it must exist), and Cygnus will run all the same.
The flume-env.sh file is inherited from Apache Flume, and it is used to configure certain Flume parameters such as a classpath overriding the default one, some Java options (-Xms, -Xmx, etc.) and so on.
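For orientation, a skeletal agent_<id>.conf in the Flume syntax mentioned above. The agent name, port, channel sizing and the sink class are illustrative, recalled from older Cygnus examples; check the documentation for your version before using them:

cygnusagent.sources = http-source
cygnusagent.channels = mysql-channel
cygnusagent.sinks = mysql-sink

cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnusagent.sources.http-source.channels = mysql-channel
cygnusagent.sources.http-source.port = 5050

cygnusagent.channels.mysql-channel.type = memory
cygnusagent.channels.mysql-channel.capacity = 1000

cygnusagent.sinks.mysql-sink.type = com.telefonica.iot.cygnus.sinks.OrionMySQLSink
cygnusagent.sinks.mysql-sink.channel = mysql-channel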

Configuration of dockerized applications

How do you deal with host-specific configuration for docker containers?
Most importantly, production passwords. They can't be baked into the container image for security reasons. Also, they need to be changed on a regular basis without redeploying anything.
I mostly use volume containers if I really want to avoid making an image for that: http://docs.docker.com/userguide/dockervolumes/
However, I have done things like
FROM myapp
# bake the host-specific password file into a derived image
ADD password.file /etc/
# stand-in for "do some cfg": lock the file down
RUN chmod 600 /etc/password.file
to make specific images with password baked in.
That way the specific configuration, passwords included, is not in the general images, which are re-used and well known; it lives in an image that is built for, and on, the host that runs it. (This approach is not really very different from using Puppet/Chef etc. to customise the files on a VM/server: the configuration eventually needs to hit the filesystem in a safe but repeatable way.)
As one comment says, use environment variables, which you can pass via -e pass=abcdef. You can also save them in a fig.yml config file to avoid having to type them every time.
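For example, a minimal fig.yml carrying that variable (the service and image names here are placeholders):

myapp:
  image: myapp
  environment:
    - pass=abcdef

Then fig up starts the container with the variable already set, instead of typing -e pass=abcdef on every run.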

Windows Service running in multiple instances - best practice?

I have a project where a number of 'environments' are running simultaneously: a local development environment (VS), Dev, Test and Prod.
We now wish to expand the program suite with a 'server application' to process background assignments such as calculations and mail sending.
I'm trying to find best practice for this situation.
I'm thinking that it should be a windows service.
As a result, I need to have three copies of the service running (Dev, Test and Prod), preferably on the single server assigned as our application server. I'm thinking I can copy the relevant exe to separate directories and 'somehow' instruct each service which environment it is supposed to connect to.
It's important to note that the three services would not necessarily be running the same release of the code.
What is the best practice for doing this?
Any input appreciated!
Anders, Denmark
Definitely sounds like Windows Services would be the right call. These services would be daemons, running independently from each other.
I recommend against creating 3 executables. Stick with just one as it is easier for deployment.
Have your exe take a command-line parameter telling it which environment it should run against, and fire off the appropriate part of the code.
It's then pretty easy to start, stop and query your services.
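For example, the same exe could be installed three times under different names with sc.exe (service names and paths here are made up; note that the space after binPath= is required, and that arguments embedded in binPath are handed to the service's entry point when it starts):

sc create MyWorkerDev binPath= "C:\services\dev\MyWorker.exe Dev"
sc create MyWorkerTest binPath= "C:\services\test\MyWorker.exe Test"
sc create MyWorkerProd binPath= "C:\services\prod\MyWorker.exe Prod"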
Let me know your thoughts!
Use an application config file for each installed instance of the service executable so you can set the environment to run against.
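For instance, each install directory gets its own copy of the exe plus a config file naming its environment (the key name is illustrative):

<!-- MyWorker.exe.config, one per install directory -->
<configuration>
  <appSettings>
    <add key="Environment" value="Test" />
  </appSettings>
</configuration>

The service then reads the value at startup via ConfigurationManager.AppSettings["Environment"].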
