Transforming Web.Config in a Windows Docker container

I have a .NET app that uses Octopus to deploy to the server. In this process, Octopus updates some of the values in the Web.Config (API keys, database connection string etc). I'm moving this app into a container on the same server, and the image has been built before Octopus gets anywhere near it.
How do I update the Web.Config based on the environment in which the docker run command is being triggered?
I've found this blog post, which seems to necessitate a config transformation file.
I could also pass everything in as environment variables to the container, but then I'd have to change how the app reads its settings, which I don't want to do because there are lots of other apps that would need the same change.
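For reference, the environment-variable route mentioned above would look roughly like the sketch below; the setting names and values are placeholders rather than the app's real configuration, and the trade-off is exactly the one described: the app would have to read these values from the environment instead of from Web.Config.

docker run -d --name myapp -e "ApiKey=dev-api-key" -e "ConnectionString=Server=devsql;Database=MyApp" myapp:latest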

Related

Hosting multiple Single Page Apps with Docker

We have a couple of single page apps that we want to host on a single web server. I'm only talking about the frontend part (Angular, React). The APIs run elsewhere. Each app is basically just a directory with a collection of static files (js, html, css, etc.) generated by the CI process. In fact, the build process creates one Docker image per app. Each image basically just contains a directory that contains the build artifacts.
All apps should appear in different folders on the same website:
/app1
/app2
/app3
What would be the best practice for deploying the apps? We've come up with a few strategies.
1. A single image / container
We could build a final web server image (e.g. Apache) and merge all the directories from the app images into it.
Cons: Versioning sounds like hell. Each new version of an app causes a new version of the final image. What if we want to revert to an older version of an app while a newer version of another app has already been deployed?
2. Multiple containers with a front-end reverse proxy
We could build each app image with its own built-in web server. And then route them all together with a front-end reverse proxy (nginx, traefik, etc.).
Cons: Waste of resources running multiple web servers.
3. One web server container and multiple data-only containers for the apps
Deploy each app in a separate container that provides its app directory as a volume but does nothing else. Then there is a separate web server container that shares the same volumes in order to have access to all the files.
So far I like the 3rd variant best. Whenever a new version of an app needs to be deployed, we simply do a docker pull on a new version of its image. But it still seems hacky: volumes must be deleted manually, otherwise the volume will not be seeded with the new content. Also, having containers that do nothing isn't really the Docker way, is it?
A Docker container wraps a process, but your compiled front-end applications are static files. That is, the setup you're describing here doesn't really match Docker's model.
Without Docker you could imagine deploying these to a single directory
/var/www/
app1/
index.html
css/app.css
app2/
index.html
css/app2.css
js/main.js
and serve these with a single HTTP server; you would not typically run a separate server for each front-end application.
A totally reasonable option, in fact, is to completely ignore Docker here. Even if your back-end applications are being served from containers, you can publish your front-end code (again, compiled to static files) via whatever hosting service you have conveniently available. Things like Webpack's file hashing can help support deploying updated versions of the application without breaking existing clients.
If I were using Docker I'd use either of your first two options, but not the third. Running a combined all-the-front-ends HTTP server is the same pattern already discussed, except the HTTP server is in a container instead of on the host. Running a dedicated HTTP server for each front-end application lets you use Docker's image versioning, and the incremental cost of an additional HTTP server isn't that high.
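As a rough sketch of what that versioning buys you (the registry, image names and tags here are made up), each app is built and tagged independently, so rolling one app back never touches the others:

# hypothetical registry/tags: each app image is built and versioned on its own
docker build -t registry.example.com/app1:2.4.0 ./app1
docker push registry.example.com/app1:2.4.0
# rolling app1 back is just running the previous tag; app2 and app3 are untouched
docker stop app1 && docker rm app1
docker run -d --name app1 registry.example.com/app1:2.3.1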
I would avoid any approach that involves named volumes or "data-only containers". Nothing ever automatically copies content into a volume, except for one specific corner case (on native Docker only, using named volumes but not any other kind of mount, only the first time you use a volume but never updating the volume content), and so you'd have to manually write code to copy content out of an image into a shared hosting location; that's more complicated and doesn't really gain you anything over directly running Webpack on the host.
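If you did go down that road anyway, the manual step alluded to above amounts to something like the following (the image name and paths are illustrative only):

# create a stopped container from the app image, copy its static files out
# to the shared hosting location, then discard the container
mkdir -p /var/www/app1
docker create --name app1-extract app1:2.4.0
docker cp app1-extract:/usr/share/nginx/html/. /var/www/app1/
docker rm app1-extract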

What is the benefit of dockerizing an SPA web app

I dockerize my SPA web app by using nginx as the base image, then copying in my nginx.conf and build files. As the Dockerize Vue.js App guide mentions, I think many SPA dockerizing solutions are similar.
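For context, the kind of image being described is usually only a few Dockerfile lines; a minimal sketch, assuming the standard nginx image layout and that the build output lands in dist/:

FROM nginx:1.25
COPY nginx.conf /etc/nginx/nginx.conf
COPY dist/ /usr/share/nginx/html/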
If I don't use Docker, I first build the SPA code and then copy the build files to the nginx root directory (after installing/setting up nginx I barely change it at all).
So what's the benefit of dockerizing an SPA?
----- update -----
One answer said "If the app is dockerized, each time you release a new version of your app the Nginx server gets all the updates available for it." I don't agree with that at all. I don't need the latest version of nginx; after all, I only use the basic features of nginx. Some of my team members just use the nginx version bundled with Linux when doing development. If my Docker image uses the latest nginx, it actually creates a different environment from the development environment.
I realize my question will probably be closed because it will be seen as opinion-based, but I have googled this and can't find a satisfying answer.
If I don't use docker I will first build SPA code then copy the build files to nginx root directory (After install/set up nginx I barely change it at all)
This is a security concern... it sounds like the server is being treated as fire-and-forget here.
If the app is dockerized, each time you release a new version of your app the Nginx server gets all the updates available for it.
Bear in mind that if your app does not release new versions on a weekly basis, then you should consider rebuilding the Docker images at least weekly in order to pick up the updates and keep everything current with the latest security patches.
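In practice that just means a scheduled rebuild that re-pulls the base image and skips cached layers; roughly (the image name is assumed):

# scheduled (e.g. weekly) rebuild that refreshes the base image and ignores the cache
docker build --pull --no-cache -t myapp:latest .
docker push myapp:latest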
So what's the benefit of dockerizing an SPA?
Same environment across development, staging and production. This is called 100% parity across all the stages where you run your app, and it holds no matter what type of application you deploy.
If something doesn't work in production, you can pull the Docker image by its digest and run it locally to debug and try to understand where the problem is. If you need to SSH into a production server, it means your automation pipeline has failed, or maybe you are not even using one...
Tools like Webpack compile Javascript applications to static files that can then be served with your choice of HTTP server. Once you’ve built your SPA, the built files are indistinguishable from pages like index.html and other assets like image files: they’re just static files that get served by some HTTP server.
A Docker container encapsulates a single running process. It doesn’t really do a good job at containing these static files per se.
You’ll frequently see “SPA Docker containers” that run a developer-oriented HTTP server. There’s no particular benefit to doing this, though. You can get an equally good developer experience just by developing your application locally, running npm run build or whatever to create a dist directory, and then publishing that the same way you’d publish other assets. An automation pipeline is helpful here, but this isn’t a task Docker makes wildly simpler.
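Concretely, that non-Docker path can be as small as the following (the destination host and path are illustrative):

# build the SPA and publish the static output like any other asset
npm run build
rsync -av dist/ deploy@www.example.com:/var/www/app1/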
(Also remember when you do this that the built application runs in the user's browser. That means it can't see any of the Docker-internal networking machinery: it can't use the Docker-internal IP addresses and it can't use the built-in Docker DNS service. Everything it reaches has to be on docker run -p published ports, and it has to use a DNS name that reaches the host. The browser literally has no idea Docker is involved in this at all.)
There are a few benefits.
Firstly, building a Docker image means you are explicitly stating what your application's canonical run-time is - this version of nginx, with that SSL configuration, whatever. Changes to the run-time are in source control, so you can upgrade predictably and reversibly. You say you don't want "the latest version" - but what if that latest version patches a critical security vulnerability? Being able to upgrade predictably, on "disposable" containers means you upgrade when you want to.
Secondly, if the entire development team uses the same Docker image, you avoid the challenges with different configurations giving the "it works on my machine" response to bugs - in SPAs, different configurations of nginx can lead to different behaviour. New developers who join the team don't have to install or configure anything, and can use any device they want - they can be certain that what runs in Docker is the same as it is for all the other developers.
Thirdly, by having all your environments containerized (not just development, but test and production), you make it easy to move versions through the pipeline and only change the environment-specific values.
Now, for an SPA, these benefits are real, but they may not outweigh the cost and effort of creating and maintaining Docker images - inevitably, the Docker image becomes a bottleneck and the first thing people blame. I'd only invest in it if you see lots of environment-specific pain (suggesting a consistent run-time environment is necessary), or if you see lots of "it works on my machine" bugs.

How to get an XML file via HTTP when Jenkins runs Codeception

Our developers develop in Virtualbox VMs which have a web server configured to point at the codebase they are working on. When they run Codeception tests, the code calls the local web server requesting a data file from within the codebase and proceeds with the test. Works well.
The problem I'm running into is running the tests on our Jenkins server (using a Declarative Pipeline, if that matters). Jenkins installs the code into a temporary directory (/var/lib/jenkins/workspace/). Obviously I can't point the web server on Jenkins at that temp directory (especially when I'm building different branches in different temporary directories), so the tests fail when trying to fetch the data file.
The reason we fetch the data file via HTTP is that the code we are testing does precisely that in production: it fetches code from another web server not under our control, and we want to mimic that functionality in our tests.
How should this be handled? Copy the data files to a common directory outside of Jenkins that the web server points to? If so, how do I handle different developers using the same data file with different values? Or do I just tell them "Don't do that?"
Any suggestions?

Configuring Docker multi-container App with Spring Cloud Server for OpenShift Origin

I'm stuck with a few limitations while trying to build an effective cluster-based design for a distributed app (using Docker + OpenShift Origin).
To give a brief idea of my current architecture: we have multiple WARs and micro-services, and all these apps follow a common approach of reading property files from an external folder (outside the WAR).
Example: /usr/local/share/appconfigs, which my app references from the classpath.
We are using a token-based approach to generate these property files based on the environment. These files are available in GitHub.
To Dockerize our apps (WARs & services), I build these properties first and then copy them to catalina_base in the Dockerfile to make them available on the classpath.
Now, to make my app flexible enough to run multiple instances for different environments (example: DEV, INT, PREFIX), I am considering spring-cloud-config (server).
Brief summary,
Step 1) My externalized properties are built and available in github (Example: appconfigproperties)
Step 2) One Docker container runs with spring-cloud-server-config to serve property files based on profile key
Step 3) Run Docker App (war & other services) in another container using above properties.
Now Limitations here
I cannot use spring-cloud-config-client in my app, because the app is not built on Spring Boot. So the only option I'm left with is the REST-based API to get the properties.
But I need the properties served by one running container (the spring-cloud-config-server) inside another container's Dockerfile, so they can be copied to its catalina-base folder (so, technically, before the app runs).
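Since the client library is off the table, one pattern that stays within the REST option above is to fetch a rendered properties file when the container starts rather than when the image is built; Spring Cloud Config Server serves plain files at /{application}-{profile}.properties. A rough sketch, in which the config-server host, app name, profile and target path are all placeholders:

# entrypoint-style sketch: pull the rendered properties for this profile,
# drop them where the app expects them, then start Tomcat
curl -fsS http://config-server:8888/myapp-dev.properties -o "$CATALINA_BASE/appconfigs/myapp.properties"
catalina.sh run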
If I want to run my app in DEV or INT, I should just need to run a container with a few clicks, making this distributed app completely configuration-driven and on demand.
Appreciate your time to read and suggest possible changes to the solution if needed.

Docker: where should I put container configuration and war file

Recently I was trying to figure out what a Docker workflow looks like.
What I thought is: devs build and push images, and in other environments the servers just pull that image directly and run it.
But I can see that a lot of public images allow people to put configuration outside the container.
For example, the official elasticsearch image documents a command like this:
docker run -d -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch
So what is the point of putting configuration outside the container, instead of baking it into the image so the container can just be pulled and run quickly?
My argument is
if I put configuration inside a custom image, then in the testing environment or production the server just needs to pull that same image, which is already built.
if I put configuration outside the image, then in other environments there will be another process to get that configuration from somewhere. Sure, we could use git to source-control it, but isn't that a tedious and needless effort to manage? Installing third-party libraries is also required.
Further question:
Should I put the application file (for example, war file) inside web server container or outside it?
When you are doing development, configuration files may change often; so rather than keep rebuilding the containers, you may use a volume instead.
If you are in production and need dozens or hundreds of the same container, all with slightly different configuration files, it is easy to have one single image and have diverse configuration files living outside (e.g. use consul, etcd, zookeeper, ... or VOLUME).
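To make the contrast concrete, here is a sketch of the two approaches from the question and the answer (image names and paths are illustrative):

# Option A: bake configuration into a custom image; changing config means a rebuild
#   Dockerfile:  FROM elasticsearch
#                COPY config/ /usr/share/elasticsearch/config/
docker build -t my-elasticsearch:prod .
docker run -d my-elasticsearch:prod

# Option B: one generic image, environment-specific config mounted from outside
docker run -d -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch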
