We have a Java application that connects to an Oracle database. It has three properties files, and the customer changes the values in those files when we deploy our application at the customer site. We also have a Python script that lets the customer modify those properties files and then start the main app once configuration is done.
Now we want to containerize our Java app and run it in Swarm mode. The only problem is finding an easy way for the customer to edit those properties files before the container starts.
Any ideas, Docker gurus?
Help is very much appreciated.
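One option for Swarm specifically (a sketch, not something settled in this thread) is Docker's config objects, which inject a customer-edited file into a service without rebuilding the image; the file names and paths below are assumptions:

# publish the customer-edited file as a Swarm config (app.properties is a hypothetical name)
docker config create app_properties ./app.properties
# attach it to the service at the path the app expects to read from
docker service create \
  --name myapp \
  --config source=app_properties,target=/app/config/app.properties \
  myapp-image:latest

The customer could still run the existing Python script against the host copy of the file first; only the docker config create step would change per deployment.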
There might be something I fundamentally misunderstand about Docker and containers, but... my scenario is as follows:
I have created an ASP.NET Core application and a Docker image for it.
The application requires some settings to be added or removed at runtime.
Some DLL plugins could also be added and loaded by the application.
These settings would normally be stored in appsettings.json and a few other settings files located in a predefined relative path (e.g. ./PluginsConfig).
I don't know how many plugins there will be or how they will be configured.
I didn't want to create any kind of UI in the web application for managing settings and uploading plugins - this is to be done on the backend (I need the solution to be simple and cheap).
I intend to deploy this application on a single server, and the admin user would be responsible for configuring settings, uploading plugins, etc. It's an internal productivity tool - there might be many instances of this application, but they would not be related at all.
The reason I want it in Docker is to have it as self-contained as possible, with all the dependencies in place.
But how would I then allow accessing, adding and editing of the plugins and config files?
I'm sure there's a pattern that would allow this scenario.
What you are looking for are volumes and bind mounts. You can bind files or directories from the host machine into a container, so the host and the container can share files.
Sample command (bind mount; there are also other ways):
docker container run -v /path/on/host:/path/in/container image
See the Docker documentation for detailed information on volumes and bind mounts.
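For completeness, the same bind mount with the more explicit --mount syntax (a sketch; the paths are placeholders as above):

docker container run \
  --mount type=bind,source=/path/on/host,target=/path/in/container \
  image

With --mount, Docker errors out if the host path does not exist, whereas -v silently creates it as an empty directory.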
I am containerizing an older Java web application with Docker. My Dockerfile pulls an official Tomcat image from Docker Hub (specifically, tomcat:8.5.49-jdk8-openjdk), copies my .war file into the webapps/ directory, and copies in some idiosyncratic configuration files and dependencies. It works.
Now, I know that Tomcat comes out of the box with a few directories under webapps/, including the "manager" app and some others: ROOT, docs, examples, host-manager. I'm thinking I ought to delete these, lest one of my users access them; that might be a security risk, and at the least it is unprofessional.
Is it a best practice to delete those installed-by-default web apps from an official Tomcat image? Is there any downside to doing so? It seems logical to me, but a web search didn't turn up any expert opinion either way.
Every folder under webapps/ represents a discrete web application deployed in the Tomcat servlet container after server startup.
None of those web applications has any implicit or explicit dependency on Catalina, Jasper, or any other system component of Tomcat.
You should be quite OK removing all those folders (apps), unless you need the Manager application to manage your deployments and the server. Even that can be reinstalled later on.
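A minimal Dockerfile sketch of that cleanup, assuming the same base image as in the question (myapp.war is a hypothetical file name):

FROM tomcat:8.5.49-jdk8-openjdk
# remove the applications Tomcat ships by default (ROOT, docs, examples, manager, host-manager)
RUN rm -rf /usr/local/tomcat/webapps/*
# deploy only our own application, serving it at the root context
COPY myapp.war /usr/local/tomcat/webapps/ROOT.war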
I'm using the SoftwareCollections MariaDB container and I can't seem to find a way to initialize the database with some users and data.
The official mariadb container provides the very handy /docker-entrypoint-initdb.d directory. The container runs all .sql and .sql.gz files at database initialization, but this type of functionality seems to be missing from the software collections image.
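For reference, the official image's mechanism is used roughly like this (a sketch; init.sql is a hypothetical script that creates the users and data):

# any .sql file placed in /docker-entrypoint-initdb.d runs once, on first initialization
docker run -d \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v "$PWD/init.sql":/docker-entrypoint-initdb.d/init.sql \
  mariadb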
Why was this functionality not included with Software Collections? Is it included, and am I just not looking in the right place?
Typically, database containers allow you to set up a single admin user and password. You can use this later to connect and seed any data you need.
This can be done at the application level with tools like Liquibase, or with a Kubernetes Job, depending on your use case.
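A minimal seeding sketch along those lines, assuming the container publishes port 3306 and seed.sql is a hypothetical script:

# connect with the admin credentials configured for the container and load the data
mysql -h 127.0.0.1 -P 3306 -u admin -p"$ADMIN_PASSWORD" < seed.sql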
I am stuck on a few limitations while building an effective cluster-based design for a distributed app (using Docker + OpenShift Origin).
To give a brief idea of my current architecture: we have multiple WARs and microservices, and all these apps follow a common approach of reading property files from an external folder (outside the WAR).
Example: /usr/local/share/appconfigs, which my app reads from the classpath.
We are using a token-based approach to generate these property files per environment. These files are available in GitHub.
To Dockerize our apps (WARs and services), I build these properties first and then copy them into CATALINA_BASE in the Dockerfile to make them available on the classpath.
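A sketch of that Dockerfile step, assuming the generated files land in a local appconfigs/ folder (hypothetical) and that $CATALINA_BASE/lib is on the common classpath:

FROM tomcat:8.5-jdk8-openjdk
# property files in CATALINA_BASE/lib are visible on Tomcat's common classpath
COPY appconfigs/ /usr/local/tomcat/lib/
COPY myapp.war /usr/local/tomcat/webapps/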
Now, to make my app flexible enough to run multiple instances for different environments (example: DEV, INT, PREFIX), I am considering spring-cloud-config (server).
Brief summary:
Step 1) My externalized properties are built and available in GitHub (example: appconfigproperties).
Step 2) One Docker container runs spring-cloud-config-server to serve the property files based on a profile key (see the sketch after this list).
Step 3) The Docker app (WAR and other services) runs in another container using the above properties.
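A sketch of step 2, assuming the default Config Server port 8888, a hypothetical image name, and the repository from step 1 (the URL is a placeholder):

# point the config server at the Git repo holding the property files
docker run -d --name config-server -p 8888:8888 \
  -e SPRING_CLOUD_CONFIG_SERVER_GIT_URI=https://github.com/example/appconfigproperties \
  my-config-server:latest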
Now, the limitations:
I cannot use spring-cloud-config-client in my app, because the app is not built on Spring Boot. So the only option I am left with is the REST-based API to get the properties.
But I need the properties served by one running container (the spring-cloud-config-server) inside another container's Dockerfile, to copy them to its CATALINA_BASE folder (so, technically, before the app runs).
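One way to bridge that gap (a sketch, shifting the copy from build time to container start; the server host, app name, and profile are assumptions):

#!/bin/sh
# entrypoint.sh: pull the properties from the config server before Tomcat starts;
# Spring Cloud Config Server serves plain files at /{application}-{profile}.properties
curl -fsS http://config-server:8888/myapp-dev.properties \
  -o "$CATALINA_BASE/lib/application.properties"
exec catalina.sh run

This sidesteps the chicken-and-egg problem: the image no longer needs the properties at build time, so one image can serve every environment.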
If I want to run my app in DEV or INT, I should just need to run a container with a few clicks, making this distributed app completely configuration-driven and on demand.
I appreciate your time reading this, and I welcome suggested changes to the solution if needed.
I would like to achieve the following: I have a C# server application which is run by a Windows Service. The service currently requires the server application to be located in a specific directory.
Is it possible to create a Windows Service that takes a directory at start and runs the application from that directory? How do you do that?
Can such a "configurable" service be used to start multiple applications (executables with the same name but located in different directories)? This would be used to run different versions of a server application in parallel. Or do you need one service per running instance?
Yes, simply set the context to reflect the desired environment. To do this, use Environment.SetEnvironmentVariable.
A single service can start many applications, each with its own environment. Use a configuration file, or persistent data in the registry, to hold the per-application settings.
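If you do end up preferring one service per running instance instead, registering the same binary from different directories is also straightforward; a sketch using sc.exe (service names and paths are hypothetical):

REM each registration points at the same executable name in a different version folder
sc create MyServerV1 binPath= "C:\apps\v1\Server.exe"
sc create MyServerV2 binPath= "C:\apps\v2\Server.exe"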