Editing static resources in Docker with a Spring Boot application

I am creating an application with Spring Boot and Docker. What I want to do is edit the static resources (.js and .css) without having to stop/rebuild/start the container.
I have read a few tutorials, blogs, and examples (including http://bsideup.blogspot.com/2015/04/spring-boots-fat-jars-vs-docker.html). Every example I have seen suggests that I must rebuild the jar, rebuild the container, and stop/start the container to see any sort of change.
Have I overlooked some configuration that would allow me to edit a .css file and immediately see the change reflected in the browser?
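One approach worth sketching (everything below - paths, property names, image name - is an illustrative assumption, not taken from the question) is to tell Spring Boot to also serve static content from a directory outside the jar, for example in application.properties:

    # Older Spring Boot versions; newer ones use spring.web.resources.static-locations
    spring.resources.static-locations=file:/app/static/,classpath:/static/

and then bind-mount the host's static source directory into the container, so edits to .css/.js files on the host are visible on the next page reload (a hard browser refresh may still be needed if the old file is cached):

    docker run -p 8080:8080 \
        -v "$PWD/src/main/resources/static:/app/static" \
        my-spring-boot-image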

Related

log4cxx logger modification in docker container

I have a C++ application that uses log4cxx for logging. The log4cxx configuration is done via an XML file, where the logging level can be set and individual loggers can be enabled and disabled. With the installation running in a VM it was easy to make the necessary modifications: get into the VM and change the XML file manually. Now we are going to run the application as a Docker image in the cloud, so the question is how to modify the logger level as and when needed. I did try to search for this before asking here, but the solutions I found are Java-based (Spring Boot Admin, etc.), which is not suitable here.
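Purely as an illustration (the file paths and image name here are made up), a common pattern is to keep the log4cxx XML outside the image and bind-mount it, so the logging configuration can be edited on the host without rebuilding the image:

    # The XML lives on the host (or in a mounted config volume) and is only
    # referenced by the container, so it can be edited in place.
    docker run -d \
        -v /srv/myapp/log4cxx.xml:/app/config/log4cxx.xml \
        my-cpp-app-image

If the application loads the file with log4cxx's configureAndWatch, level changes are picked up while the container is running; otherwise a container restart applies them.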

Hosting multiple Single Page Apps with Docker

We have a couple of single page apps that we want to host on a single web server. I'm only talking about the frontend part (Angular, React). The APIs run elsewhere. Each app is basically just a directory with a collection of static files (js, html, css, etc.) generated by the CI process. In fact, the build process creates one Docker image per app. Each image basically just contains a directory that contains the build artifacts.
All apps should appear in different folders on the same website:
/app1
/app2
/app3
What would be the best practice for deploying the apps? We've come up with a few strategies.
1. A single image / container
We could build a final web server image (e.g. Apache) and merge all the directories from the app images into it.
Cons: Versioning sounds like hell. Each new version of an app causes a new version of the final image. What if we want to revert to an older version of an app while a newer version of another app has already been deployed?
2. Multiple containers with a front-end reverse proxy
We could build each app image with its own built-in web server. And then route them all together with a front-end reverse proxy (nginx, traefik, etc.).
Cons: Waste of resources running multiple web servers.
3. One web server container and multiple data-only containers for the apps
Deploy each app in a separate container that provides its app directory as a volume but does nothing else. Then there is a separate web server container that shares the same volumes in order to have access to all the files.
So far I like the 3rd variant best. Whenever a new version of an app needs to be deployed, we simply docker pull a new version of its image. But it still seems hacky. Volumes must be deleted manually; otherwise the volume will not be seeded with the new content. Also, having containers that do nothing isn't really the Docker way, is it?
A Docker container wraps a process, but your compiled front-end applications are static files. That is, the setup you're describing here doesn't really match Docker's model.
Without Docker you could imagine deploying these to a single directory
    /var/www/
      app1/
        index.html
        css/app.css
      app2/
        index.html
        css/app2.css
        js/main.js
and serve these with a single HTTP server; you would not typically run a separate server for each front-end application.
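For example, a minimal nginx server block (illustrative only; any static file server works the same way) could serve all of the apps from that one docroot:

    server {
        listen 80;
        root /var/www;
        # Each app lives in its own subdirectory of the shared docroot;
        # fall back to the app's index.html for client-side routing.
        location /app1/ { try_files $uri /app1/index.html; }
        location /app2/ { try_files $uri /app2/index.html; }
    }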
A totally reasonable option, in fact, is to completely ignore Docker here. Even if your back-end applications are being served from containers, you can publish your front-end code (again, compiled to static files) via whatever hosting service you have conveniently available. Things like Webpack's file hashing can help support deploying updated versions of the application without breaking existing clients.
If I were using Docker I'd use either of your first two options, but not the third. Running a combined all-the-front-ends HTTP server is the same pattern already discussed, except the HTTP server runs in a container instead of on the host. Running a dedicated HTTP server for each front-end application lets you use Docker's image versioning, and the incremental cost of an additional HTTP server isn't that high.
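A rough sketch of that second option (image names and file names are made up for illustration): each app image bundles nginx plus its static files, and a front-end proxy routes by path, so each app can be upgraded or rolled back just by changing its image tag.

    # docker-compose.yml
    services:
      app1:
        image: registry.example.com/app1:1.4.0   # nginx + app1's static files
      app2:
        image: registry.example.com/app2:2.1.3   # nginx + app2's static files
      proxy:
        image: nginx:stable
        ports:
          - "80:80"
        volumes:
          # proxy.conf contains location blocks such as
          #   location /app1/ { proxy_pass http://app1/; }
          - ./proxy.conf:/etc/nginx/conf.d/default.conf:ro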
I would avoid any approach that involves named volumes or "data-only containers". Nothing ever automatically copies content into a volume, except for one specific corner case (on native Docker only, using named volumes but not any other kind of mount, only the first time you use a volume but never updating the volume content), and so you'd have to manually write code to copy content out of an image into a shared hosting location; that's more complicated and doesn't really gain you anything over directly running Webpack on the host.

Coldfusion Docker is uninstalling modules on build

I'm having issues creating a usable Docker container for a ColdFusion 2021 app. I can create the container, but every time it is rebuilt I have to reinstall all of the modules (admin, search, etc.). This is an issue because the site that the container will be housed on will be rebuilding the container every day.
The container is being built with docker-compose. I have tried using the installModule and importModule environment variables, running the install command from the Dockerfile, building the container and creating a .car file to keep the settings, and disabling the secure mode using the environment variables.
I have looked at the log, and all of the different methods used to install/import the modules do actually download and install the modules. However, when the container first starts to spin up there's a section where the selected modules are installed (and the modules that are not installed are listed). That section is followed by the message that the ColdFusion services are available; then it starts services, security, etc., uninstalls (and removes) the modules, says that no modules are going to be installed because they are not present, and gives the "services available" message again.
Somehow, it seems that one of the services is uninstalling and removing the module files, and none of the environment variables (or even the setupscript) affect that process. I thought it might be an issue with the secure setup, but even with that disabled the problem persists. My main question is: what could be causing the modules to be uninstalled?
I was also looking for clarification on a couple of items:
a) All of the documentation I could find said that the .CAR file would be automatically loaded if it was in the /data folder (in one spot it's referred to as the image's /data folder). That would be at the top level with /opt and /app, right? I couldn't find an existing data folder anywhere.
b) Several of the logs and help functions mention a /docs folder, but I can't find it in the file directory. Would anyone happen to know where I can find it? It seems like that would be helpful for solving this.
Thank you in advance for any help you can give!
I don't know if the Adobe images provide a mechanism to automatically install modules every time the container is rebuilt, but I recommend you look into the Ortus CommandBox-based images. They have an environment variable for the cfpm packages you want installed, and CFConfig, which is much more robust than .car files.
https://hub.docker.com/r/ortussolutions/commandbox/
FYI, I work for Ortus Solutions.

Editing application settings of a containerized application after deployment

There might be something I fundamentally misunderstand about Docker and containers, but... my scenario is as follows:
I have created an ASP.NET Core application and a Docker image for it.
The application requires some settings to be added/removed at runtime.
Also, some DLL plugins could be added and loaded by the application.
These settings would normally be stored in appsettings.json and a few other settings files located in a predefined relative path (e.g. ./PluginsConfig).
I don't know how many plugins there will be or how they will be configured.
I didn't want to create any kind of UI in the web application for managing settings and uploading plugins - this was to be done on the backend (I need the solution to be simple and cheap).
I intend to deploy this application on a single server, and the admin user would be responsible for configuring settings, uploading plugins, etc. It's an internal productivity tool - there might be many instances of this application, but they would not be related at all.
The reason I want it in docker is to have it as self-contained as possible, with all the dependencies being there.
But how would I then allow the config files and plugins to be accessed, added, and edited?
I'm sure there's a pattern that would allow this scenario.
What you are looking for are volumes and bind mounts. You can bind files or directories from a host machine to a container. Thus, host and container can share files.
Sample command (a bind mount; there are also other ways):

    docker container run -v /path/on/host:/path/in/container image

Detailed information on volumes and bind mounts is available in the Docker documentation.
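Applied to the question above, a sketch might look like this (the host paths and image name are assumptions; the container paths mirror the relative paths mentioned in the question):

    docker container run -d -p 8080:80 \
        -v /srv/mytool/appsettings.json:/app/appsettings.json \
        -v /srv/mytool/PluginsConfig:/app/PluginsConfig \
        -v /srv/mytool/plugins:/app/plugins \
        my-aspnetcore-tool

The admin can then edit the settings files and drop plugin DLLs under /srv/mytool on the host; depending on whether the application watches its configuration and plugin folders, a docker restart of the container may be needed to pick up the changes.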

Is it wise to delete the default webapps from a Tomcat-based docker image?

I am containerizing an older Java web application with Docker. My Dockerfile pulls an official Tomcat image from Docker Hub (specifically, tomcat:8.5.49-jdk8-openjdk), copies my .WAR file into the webapps/ directory, and copies in some idiosyncratic configuration files and dependencies. It works.
Now I know that Tomcat comes out-of-the-box with a few directories under webapps/, including the "manager" app, and some others: ROOT, docs, examples, host-manager. I'm thinking I ought to delete these, lest one of my users access them, which might be a security risk and is unprofessional at the least.
Is it a best practice to delete those installed-by-default web apps from an official Tomcat image? Is there any downside to doing so? It seems logical to me, but a web search didn't turn up any expert opinion either way.
Every folder under webapps/ represents a discrete web application deployed in the Tomcat servlet container after server startup.
None of those web applications has any implicit or explicit dependency on Catalina, Jasper, or any other system component of Tomcat.
You should be quite OK removing all of those folders (apps), unless you need the Manager application to manage your deployments and server - and even that can be installed again later.
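For example, a minimal Dockerfile along these lines (the .war file name is a placeholder) removes the default applications before deploying your own as the root context:

    FROM tomcat:8.5.49-jdk8-openjdk
    # Remove the default webapps: ROOT, docs, examples, manager, host-manager
    RUN rm -rf /usr/local/tomcat/webapps/*
    # Deploy the application as the root context
    COPY myapp.war /usr/local/tomcat/webapps/ROOT.war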
