I have inherited a Microsoft Visual Studio MVC project for modification, and I now need to allow users to upload files to the web server. I develop on Windows 11 using IIS Express from within MS Visual Studio, and there is no issue with IIS there. However, the app runs in production on an Ubuntu-based server with NGINX as the web server.
PROBLEM: When I attempt to upload a file larger than 1 MB from my browser to the production server, I receive the error message "413 Request Entity Too Large." After scouring the web I have discovered: (a) NGINX enforces a 1 MB request-body limit by default; and (b) it is necessary to modify the nginx.conf file by adding the NGINX "client_max_body_size" directive.
I have located the nginx.conf file on the production server and have browsed it with nano. However, I have stopped short of attempting to modify and save the file due to the presence of Docker. Admittedly, I know virtually nothing about Docker and, unfortunately, the principals who set up this server have long since departed the company. Furthermore, it is unclear to me whether simply modifying the nginx.conf file and restarting NGINX on the production server will do the trick, given what I presume is the need to involve Docker.
As an aside, my customer uses Azure DevOps to facilitate collaboration. I regularly stage changes to my project using Git from within MS Visual Studio and then use an update_app.sh script on the Ubuntu server to push the changes into production. I previously attempted to modify the nginx.conf file included in the local copy of my MVC project. The change was pushed via Azure DevOps to the production server, but the modified nginx.conf never took effect in production, presumably due to the presence of Docker.
I would appreciate someone providing me with an explanation of the interaction between Docker and NGINX. I would further appreciate any tip on how to get the modified nginx.conf file pushed into production.
Thanks in advance for your assistance.
Thankfully this is pretty straightforward, and NGINX has good documentation on this topic (see the "Copying Content and Configuration Files from the Docker Host" section of Deploying NGINX and NGINX Plus on Docker).
Find the Dockerfile for the deployed solution (typically in the project or solution root directory).
Copy the contents of the current NGINX configuration file (either by browsing the container image or by finding the configuration files within source).
Create a directory named conf within your source.
Copy the current configuration into a file within conf and apply your changes to that file.
Update the Dockerfile to copy the files within conf into the location from which NGINX loads configuration files at startup. Add the following line to the Dockerfile after the FROM nginx line (note: the exact line may vary, but it should be obvious that it uses nginx as the base image): COPY conf {INSERT RELATIVE PATH TO CONF PARENT FOLDER} (see the sketch after these steps).
This will copy your local configuration file into the container when the image is built, and NGINX will load it on startup.
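For example, with the official nginx base image the two pieces might look like this (the 50 MB value and the conf.d path are assumptions; adjust them for your image):

    # conf/upload.conf -- drop-in that raises NGINX's default 1 MB body limit
    client_max_body_size 50M;

    # Dockerfile -- copy the local conf directory into the image
    FROM nginx:latest
    # the stock nginx.conf includes every *.conf under /etc/nginx/conf.d/ inside the http context
    COPY conf/ /etc/nginx/conf.d/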
Note: There are other solutions that support configuration changes without rebuilding the container.
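One such alternative is bind-mounting the configuration into the container at run time, so edits only require a container restart rather than a rebuild. A sketch, assuming the host path and image name:

    # run NGINX with the config mounted read-only from the host
    docker run -d -p 80:80 \
      -v "$PWD/conf/upload.conf:/etc/nginx/conf.d/upload.conf:ro" \
      nginx:latest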
Related
We have a couple of single page apps that we want to host on a single web server. I'm only talking about the frontend part (Angular, React). The APIs run elsewhere. Each app is basically just a directory with a collection of static files (js, html, css, etc.) generated by the CI process. In fact, the build process creates one Docker image per app. Each image basically just contains a directory that contains the build artifacts.
All apps should appear in different folders on the same website:
/app1
/app2
/app3
What would be the best practice for deploying the apps? We've come up with a few strategies.
1. A single image / container
We could build a final web server image (e.g. Apache) and merge all the directories from the app images into it.
Cons: Versioning sounds like hell. Each new version of an app causes a new version of the final image. What if we want to revert to an older version of an app while a newer version of another app has already been deployed?
2. Multiple containers with a front-end reverse proxy
We could build each app image with its own built-in web server. And then route them all together with a front-end reverse proxy (nginx, traefik, etc.).
Cons: Waste of resources running multiple web servers.
3. One web server container and multiple data-only containers for the apps
Deploy each app in a separate container that provides its app directory as a volume but does nothing else. Then there is a separate web server container that shares the same volumes in order to have access to all the files.
So far I like the 3rd variant best. Whenever a new version of an app needs to be deployed, we simply pull a new version of its image. But it still seems hacky: volumes must be deleted manually, otherwise they will not be re-seeded with the new content. Also, having containers that do nothing isn't really the Docker way, is it?
A Docker container wraps a process, but your compiled front-end applications are static files. That is, the setup you're describing here doesn't really match Docker's model.
Without Docker you could imagine deploying these to a single directory
/var/www/
  app1/
    index.html
    css/app.css
  app2/
    index.html
    css/app2.css
    js/main.js
and serve these with a single HTTP server; you would not typically run a separate server for each front-end application.
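A single server block can cover all of the app paths. A minimal sketch (the root and fallback locations assume the hypothetical layout above):

    server {
        listen 80;
        root /var/www;

        # each SPA gets its own history-API fallback (likewise for /app3/)
        location /app1/ { try_files $uri $uri/ /app1/index.html; }
        location /app2/ { try_files $uri $uri/ /app2/index.html; }
    }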
A totally reasonable option, in fact, is to completely ignore Docker here. Even if your back-end applications are being served from containers, you can publish your front-end code (again, compiled to static files) via whatever hosting service you have conveniently available. Things like Webpack's file hashing can help support deploying updated versions of the application without breaking existing clients.
If I were using Docker, I'd use either of your first two options, but not the third. Running a combined all-the-front-ends HTTP server is the same pattern already discussed, except the HTTP server is in a container instead of on the host. Running a dedicated HTTP server for each front-end application lets you use Docker's image versioning, and the incremental cost of an additional HTTP server isn't that high.
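For the second option, each front-end image can stay tiny. A sketch, assuming the CI build emits its artifacts into dist/:

    # Dockerfile for one SPA -- a dedicated nginx serves its compiled static files
    FROM nginx:alpine
    COPY dist/ /usr/share/nginx/html/

The front-end reverse proxy then routes /app1, /app2, etc. to the corresponding containers.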
I would avoid any approach that involves named volumes or "data-only containers". Nothing ever automatically copies content into a volume, except for one specific corner case (on native Docker only, using named volumes but not any other kind of mount, only the first time you use a volume but never updating the volume content), and so you'd have to manually write code to copy content out of an image into a shared hosting location; that's more complicated and doesn't really gain you anything over directly running Webpack on the host.
I have a WebLogic 12.1.2 image onto which I'm looking to auto-deploy an application EAR.
I can do this when building the image using the Dockerfile, i.e.:
COPY some.ear /u01/domains/mydomain/autodeploy
A container run from this image works fine, and I can reach my application's REST interface with Postman.
However, if I restart that container, the autodeployed application no longer exists in the container's WebLogic console and hence Postman calls fail.
The ear file is still in the autodeploy directory on restart.
Has anyone seen this behaviour before?
I've found a little hack around this.
My container has the WebLogic autodeploy directory mounted to a host directory containing the EAR. If I rename the EAR (e.g. some.ear to some.ear.1) and then rename it back (some.ear.1 to some.ear), or cut/paste the EAR to and from another directory, then WebLogic picks it up again. It's not ideal, but it works, in case anyone else comes across the same issue.
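On the host, that workaround boils down to something like this (a sketch; the placeholder path is whatever host directory is mounted at the autodeploy location):

    # nudge WebLogic's autodeploy poller by renaming the EAR and renaming it back
    cd /path/to/host/autodeploy-mount   # mounted at /u01/domains/mydomain/autodeploy in the container
    mv some.ear some.ear.1
    mv some.ear.1 some.ear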
When running an NGINX server that hosts static content, which file permissions should be set? The main consideration is to have the safest configuration.
I currently have two dockerized NGINX servers behind a reverse proxy, one of them containing files owned by 1000:1000 (copied directly from the host machine), the other owned by root:root (copied from a multi-stage build). The current configuration works, but I would like to know the best practice.
Folders need read and execute permissions (execute to traverse). Static files just need read. I assume your server is not running scripts. Ownership should not be root but a user/group that nginx can run as.
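In practice that might look like this inside the image (a sketch; the docroot and the worker user/group vary by image and distro):

    # official nginx images run worker processes as user "nginx"
    chown -R nginx:nginx /usr/share/nginx/html
    # directories: read + traverse; files: read only
    find /usr/share/nginx/html -type d -exec chmod 550 {} +
    find /usr/share/nginx/html -type f -exec chmod 440 {} +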
I've been browsing Docker Hub and I'm trying to determine the quality of builds.
I've got 2 questions:
Question 1
I came across this image: https://hub.docker.com/r/perfectweb/production/~/dockerfile/
It uses a lot of configuration rewriting inside the image; wouldn't it be better to just copy external configuration files into the container, as described here: Separate specific configuration in Dockerfile?
Question 2
One of the most-starred images for lemp is this one: https://hub.docker.com/r/stenote/docker-lemp/
It has a warning not to use it for production (because of the empty root password for MySQL), but I'm wondering: are there other reasons why this image is not production-safe?
wouldn't it be better to just copy external configuration files to the container?
If you copied in an already-modified php.ini from disk, that file might override changes introduced in the default php.ini by a newer version of PHP.
So the current process (rewrites) allows php.ini to evolve (when a new version of PHP is installed), while keeping the modifications visible in the Dockerfile.
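In a Dockerfile, that rewrite pattern looks roughly like this (a sketch; the setting and the php.ini path are illustrative and depend on the base image):

    # tweak individual settings in the stock php.ini instead of replacing the whole file
    RUN sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 50M/' /usr/local/etc/php/php.ini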
are there other reasons why this image is not production-safe?
Another reason might be that, by default, those services are accessible over HTTP, not HTTPS.
I am installing graphite via a docker container.
I have seen that whisper files should not be saved in the container.
So I will be using a data volume from docker to save these on the host machine.
My question is: is there anything else I should be saving on the host? (I know this is subjective, so I guess I'm looking for recommendations on what's important.)
I don't believe I need the configuration (e.g. carbon.conf), as this will come from my installation.
So I'm wondering: are there any other files from Graphite I need (e.g. log files, etc.)?
What is your reason for keeping log files? You do, though, need the directory structure in place. Logging defaults to /opt/graphite/storage/logs, which contains carbon-cache/ and webapp/ directories. The log directory for the webapp is set in its config, local_settings.py, whereas carbon uses carbon.conf. The configs are well documented, so you can look into them for specific information.
Apart from configs that are generated during installation, the only other 'file' crucial for the webapp to work is graphite.db in /opt/graphite/storage. It is used internally by the Django webapp for housekeeping information such as user auth, etc. It gets generated by python manage.py syncdb, so I believe you can generate it again on the target system.
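For the volume side, mounting the whole storage directory keeps the whisper data, log directories, and graphite.db on the host. A sketch (the host path and image name are assumptions):

    # persist Graphite's storage directory (whisper/, logs, graphite.db) outside the container
    docker run -d --name graphite \
      -v /srv/graphite/storage:/opt/graphite/storage \
      graphiteapp/graphite-statsd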