When running an NGINX server that hosts static content, which file permissions should be set? The main consideration is to have the safest configuration.
I currently have two dockerized NGINX servers behind a reverse-proxy, one of them containing files with 1000:1000 (copied directly from the host machine), the other with root:root (copied from a multi-stage build). The current configuration works, but I would like to know the best practice.
Folders need read and execute (to traverse). Static files just need read. I assume your server is not running scripts. Ownership should not be root but a user/group usable by nginx.
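For example, a minimal sketch of that layout, assuming the content lives in /usr/share/nginx/html and the worker process runs as the nginx user (adjust the path, user, and group to match your image):

chown -R nginx:nginx /usr/share/nginx/html
find /usr/share/nginx/html -type d -exec chmod 550 {} +   # read + traverse directories
find /usr/share/nginx/html -type f -exec chmod 440 {} +   # read-only files, not writable by the server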
We have a couple of single page apps that we want to host on a single web server. I'm only talking about the frontend part (Angular, React). The APIs run elsewhere. Each app is basically just a directory with a collection of static files (js, html, css, etc.) generated by the CI process. In fact, the build process creates one Docker image per app. Each image basically just contains a directory that contains the build artifacts.
All apps should appear in different folders on the same website:
/app1
/app2
/app3
What would be the best practice for deploying the apps? We've come up with a few strategies.
1. A single image / container
We could build a final web server image (e.g. Apache) and merge all the directories from the app images into it.
Cons: Versioning sounds like hell. Each new version of an app causes a new version of the final image. What if we want to revert to an older version of an app while a newer version of another app has already been deployed?
2. Multiple containers with a front-end reverse proxy
We could build each app image with its own built-in web server. And then route them all together with a front-end reverse proxy (nginx, traefik, etc.).
Cons: Waste of resources running multiple web servers.
3. One web server container and multiple data-only containers for the apps
Deploy each app in a separate container that provides its app directory as a volume but does nothing else. Then there is a separate web server container that mounts the same volumes in order to have access to all the files.
So far I like the 3rd variant best. Whenever a new version of an app needs to be deployed, we simply do a docker pull of the new version of its image. But it still seems hacky: volumes must be deleted manually, otherwise they won't be seeded with the new content. Also, having containers that do nothing isn't the Docker way, is it?
A Docker container wraps a process, but your compiled front-end applications are static files. That is, the setup you're describing here doesn't really match Docker's model.
Without Docker you could imagine deploying these to a single directory
/var/www/
  app1/
    index.html
    css/app.css
  app2/
    index.html
    css/app2.css
    js/main.js
and serve these with a single HTTP server; you would not typically run a separate server for each front-end application.
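For instance, a minimal nginx sketch of that single-server layout (the paths and the SPA-style index.html fallback are assumptions; adjust them to your apps' routing):

server {
    listen 80;
    root /var/www;

    # Each app lives under its own prefix; unknown paths fall back to that
    # app's index.html so client-side routing keeps working.
    location /app1/ {
        try_files $uri $uri/ /app1/index.html;
    }
    location /app2/ {
        try_files $uri $uri/ /app2/index.html;
    }
}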
A totally reasonable option, in fact, is to completely ignore Docker here. Even if your back-end applications are being served from containers, you can publish your front-end code (again, compiled to static files) via whatever hosting service you have conveniently available. Things like Webpack's file hashing can help support deploying updated versions of the application without breaking existing clients.
If I were using Docker I'd use either of your first two options, but not the third. Running a combined all-the-front-ends HTTP server is the same pattern already discussed, except the HTTP server is in a container instead of on the host. Running a dedicated HTTP server for each front-end application lets you use Docker's image versioning, and the incremental cost of an additional HTTP server is small.
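If you go with the first option, a multi-stage Dockerfile can do the merging. A minimal sketch, assuming hypothetical image names and that each app image keeps its build artifacts in /app:

# Hypothetical registry paths, tags, and artifact locations
FROM registry.example.com/app1:1.2.3 AS app1
FROM registry.example.com/app2:4.5.6 AS app2

FROM nginx:alpine
COPY --from=app1 /app /usr/share/nginx/html/app1
COPY --from=app2 /app /usr/share/nginx/html/app2

Because each app stage is pinned to its own tag, rolling one app back is just a rebuild with an older tag for that stage.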
I would avoid any approach that involves named volumes or "data-only containers". Nothing ever automatically copies content into a volume, except for one specific corner case (on native Docker only, using named volumes but not any other kind of mount, only the first time you use a volume but never updating the volume content), and so you'd have to manually write code to copy content out of an image into a shared hosting location; that's more complicated and doesn't really gain you anything over directly running Webpack on the host.
I have inherited a Microsoft Visual Studio MVC project for modification and I now need to allow users to upload files to the web server. I utilize Windows 11 and IIS Express from within MS Visual Studio for development purposes and there is no issue with IIS. However, the app runs in production on a Ubuntu-based server running NGINX as a web server.
PROBLEM: When I attempt to upload a file larger than 1 MB from my browser to the production server I receive the error message "413 Request Entity Too Large." After scouring the web I have discovered: (a) NGINX enforces the 1 MB limit by default; and (b) it is necessary to modify the nginx.conf file by adding the NGINX client_max_body_size directive.
I have located the nginx.conf file on the production server and have browsed it with Nano. However, I have stopped short of attempting to modify and save the file due to the presence of Docker. Admittedly, I know virtually nothing of Docker and, unfortunately, the principals who set up this server have long since departed the Company. Furthermore, it is unclear to me whether simply modifying the nginx.conf file and restarting NGINX on the production server will do the trick given what I presume to be the necessity to involve Docker.
As an aside, my customer utilizes Azure DevOps to facilitate collaboration. I regularly stage changes to my project using Git from within MS Visual Studio and subsequently use a Ubuntu update_app.sh script to push the changes into production. I had previously attempted to modify the nginx.conf file included with the local copy of my MVC project. The file was pushed via Azure DevOps to the production server but the modified nginx.conf file would not push to production, presumably due to the presence of Docker.
I would appreciate someone providing me with an explanation of the interaction between Docker and NGINX. I would further appreciate any tip on how to get the modified nginx.conf file pushed into production.
Thanks in advance for your assistance.
Thankfully this is pretty straightforward and NGINX has good documentation on this topic (see the "Copying Content and Configuration Files from the Docker Host" section of Deploying NGINX and NGINX Plus on Docker).
1. Find the Dockerfile for the deployed solution (typically in the project or solution root directory).
2. Copy the contents of the current NGINX configuration file (either by browsing the container image or by finding the configuration files within source).
3. Create a directory named conf within your source.
4. Copy the current configuration into a file within conf and apply your changes (e.g. the client_max_body_size directive) there.
5. Update the Dockerfile to copy the files within conf to the location where NGINX loads configuration files on startup. Add the following line to the Dockerfile after the FROM nginx line (note: the line may vary, but it should be obvious it's using nginx as the base image): COPY conf {INSERT RELATIVE PATH TO CONF PARENT FOLDER}
This will copy your local configuration file into the image at build time, and NGINX will load it when the container starts.
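For example, a minimal sketch assuming the stock image layout, where anything under /etc/nginx/conf.d/ is included into the http context at startup (the file name and the 50M limit are placeholders; match the paths to your actual image):

# conf/upload-size.conf -- hypothetical file name; raises the 1 MB default
client_max_body_size 50M;

# Dockerfile -- the added COPY line goes after the base-image line
FROM nginx
COPY conf/ /etc/nginx/conf.d/

Once the image is rebuilt and redeployed through your usual update_app.sh / Azure DevOps flow, the new limit takes effect.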
Note: There are other solutions that support configuration changes without rebuilding the image, such as bind-mounting the configuration file into the container.
There might be something I fundamentally misunderstand about Docker and containers, but... my scenario is as follows:
I have created an asp.net core application and a docker image for it.
The application requires some settings to be added / removed at runtime
Also some dll plugins could be added and loaded by the application
These settings would normally be stored in appsettings.json and a few other settings files located in predefined relative path (e.g. ./PluginsConfig)
I don't know how many plugins there will be or how they will be configured
I didn't want to create any kind of UI in the web application for managing settings and uploading plugins - this was to be done on the backend (I need the solution to be simple and cheap)
I intend to deploy this application on a single server, and an admin user would be responsible for configuring settings, uploading plugins, etc. It's an internal productivity tool - there might be many instances of this application, but they would not be related at all.
The reason I want it in docker is to have it as self-contained as possible, with all the dependencies being there.
But how would I then allow accessing, adding and editing of the plugins and config files?
I'm sure there's a pattern that would allow this scenario.
What you are looking for are volumes and bind mounts. You can bind files or directories from a host machine to a container. Thus, host and container can share files.
Sample command (a bind mount; there are also other ways):
docker container run -v /path/on/host:/path/in/container image
Detailed information for volumes and bind mounts
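Applied to the scenario above, a sketch might look like this (the host paths, the image name, and the plugin directory are assumptions; the container paths follow the appsettings.json / PluginsConfig layout described in the question):

# the plugin directory path inside the container is a hypothetical example
docker container run -d \
  -v /srv/myapp/appsettings.json:/app/appsettings.json \
  -v /srv/myapp/PluginsConfig:/app/PluginsConfig \
  -v /srv/myapp/plugins:/app/plugins \
  myapp-image

The admin can then edit the settings or drop plugin dlls directly on the host and restart the container to pick up the changes.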
I'm new to docker so I have a very simple question: Where do you put your config files?
Say you want to install MongoDB. You install it, but then you need to create/edit a config file. I don't think these belong on GitHub since they're used for deployment, though it's not a bad place to store the files.
I was just wondering if docker had any support for storing such config files so you can add them as part of running an image.
Do you have to use Swarm?
Typically you'll store the configuration files on the Docker host and then use a bind mount to map them into the container. This allows you to manage the configuration file separately from the running containers. When you make a change to the configuration, you can just restart the container.
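For the MongoDB example from the question, that might look like this (the host path and the config file contents are assumptions; check the image documentation for the exact paths it expects):

docker run -d --name some-mongo \
  -v /srv/mongo/mongod.conf:/etc/mongo/mongod.conf:ro \
  mongo --config /etc/mongo/mongod.conf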
You can then use a configuration management tool like Salt, Puppet, or Chef to manage copying/storing the configuration file onto the Docker host. Things like passwords can be managed by the secrets capabilities of the tool. When set up this way, changing a configuration file just means you need to restart your container and not build a new image.
Yes, in most cases you definitely want to keep your Dockerfiles in version control. If your org (or you personally) use GitHub for this, that's fine, but stick them wherever your other repos are. One of the main ideas in DevOps is to treat infrastructure as code. In fact, one of the main benefits of something like a Dockerfile (or a chef cookbook, or a puppet file, etc) is that it is "used for deployment" but can also be version-controlled, meaningfully diffed, etc.
I am installing graphite via a docker container.
I have seen that whisper files should not be saved in the container.
So I will be using a data volume from docker to save these on the host machine.
My question is: is there anything else I should be saving on the host? (I know this is subjective, so I guess I'm looking for recommendations on what's important.)
I don't believe I need configuration (e.g. carbon.conf) as this will come from my installation.
So I'm wondering: are there any other files from Graphite that I need (e.g. log files)?
What is your reason for keeping log files? You do, however, need the directory structure in place. Logging defaults to /opt/graphite/storage/logs, which contains carbon-cache/ and webapp/ directories. The log directory for the webapp is set in its config file, local_settings.py, whereas carbon uses carbon.conf. The configs are well documented, so you can look into them for specific information.
Apart from the configs generated during installation, the only other 'file' crucial for the webapp to work is graphite.db in /opt/graphite/storage. It is used internally by the Django webapp for housekeeping information such as user auth. It gets generated by python manage.py syncdb, so I believe you can generate it again on the target system.
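To tie this back to the question, a sketch of the volume setup (the host path and image name are assumptions; the whisper path is Graphite's default):

# graphite.db can be regenerated on the target, so persisting it is optional
# unless you want to keep users and other webapp data
docker run -d \
  -v /srv/graphite/whisper:/opt/graphite/storage/whisper \
  my-graphite-image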