What's the Docker pattern for serving both static and dynamic content?

I have a simple Python/Flask app. It looks like this in the container:
/var/www/app/
appl/
static/
...
app.py
wsgi.py
Before using Docker, I let nginx serve the static files directly, like this:
location /static {
alias /var/www/www.domain.com/appl/static;
}
location / {
uwsgi_pass unix:///tmp/uwsgi/www.domain.com.sock;
include uwsgi_params;
}
But now the static files are inside the container and not accessible to nginx.
I can think of 2 possible solutions:
start an nginx inside the container as before, and let the host nginx communicate with the container nginx over a port such as 8000
mount (host) /var/www/www.domain.com/static to (container) /var/www/static and copy all static files in run.sh
Which approach is preferred in Docker?

I prefer the first solution because it stays in line with factor 7 of building a 12 factor app: exposing all services on a port. There's definitely some overhead with requests running through Nginx twice, but it probably won't be enough to worry about (if it is then just add more containers to your pool). Using a custom run script to do host-side work after your container starts will make it very difficult to scale your app using tools in the Docker ecosystem.
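For the first solution, the host nginx just proxies everything to the container's published port; a minimal sketch of the host-side config (port 8000 is taken from the question, the headers are illustrative):

```nginx
location / {
    # forward to the container's nginx published on port 8000
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```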

I do not like the first solution, because running more than one service in one container is not the Docker way.
Overall, we want to expose our static folder to nginx, so a volume is the best choice. But there are a few different ways to do that:
as you mentioned, mount (host) /var/www/www.domain.com/static to (container) /var/www/static and copy all static files in run.sh
use the nginx cache, letting nginx cache the static files for you.
For example, we can write our conf like this to have nginx cache static content for 30 minutes:
proxy_cache_path /tmp/cache levels=1:2 keys_zone=cache:30m max_size=1G;
upstream app_upstream {
server app:5000;
}
location /static {
proxy_cache cache;
proxy_cache_valid 30m;
proxy_pass http://app_upstream;
}
trust uWSGI and use it to serve the static content. See "Serving static files with uWSGI" in the uWSGI documentation.
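For the uWSGI option, a static-map lets uWSGI answer /static requests from disk itself; a sketch using the paths from the question (the ini layout and module name are assumptions):

```ini
[uwsgi]
module = wsgi:app
socket = /tmp/uwsgi/www.domain.com.sock
; serve /static straight from the filesystem, bypassing the app
static-map = /static=/var/www/app/appl/static
```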

Related

Container based Nginx configuration

Seeking help from developers familiar with Wodby container management. The main objective is changing the MIME types that are gzipped. I'm confused by the documentation for customizing my Nginx container. The documentation:
https://wodby.com/docs/1.0/stacks/drupal/containers/
suggests I copy "/etc/nginx/conf.d/vhost.conf", modify it, deploy it to the repo, and use an environment variable to include it. My problem is, even if I could find this file, which is not mounted on the server when created via Wodby, it does not appear that I'm actually able to change the MIME types or the default_type, as they are already defined in the nginx.conf file.
I have also attempted to modify the Wodby stack to mount the /etc/ directory so that I could manually edit the nginx.conf file if I had to, but that only freezes the deployment.
Any help would be tremendously appreciated.
Two options:
clone a repo https://github.com/wodby/nginx/, change the template file /templates/nginx.conf.tmpl as much as you need and build your own image. See Makefile (/Makefile) for the commands they use to build the image themselves. Use this image as the image for your nginx container from docker-compose.
Run a container with the default settings, shell into the container with docker-compose exec nginx sh and copy the nginx file from the container (use cat /etc/nginx/nginx.conf and copy it somewhere). Create a new file locally and mount it via the docker-compose.yml for the nginx container like
volumes:
- ./nginx-custom.conf:/etc/nginx/nginx.conf
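Put together, option 2 as a docker-compose.yml fragment might look like this (the service name and image tag are assumptions):

```yaml
services:
  nginx:
    image: wodby/nginx
    volumes:
      # mount the locally edited copy over the image's nginx.conf
      - ./nginx-custom.conf:/etc/nginx/nginx.conf
```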

Nginx static webpage and Docker with env variable url

I've got an nginx web server serving a web page that makes some HTTP calls.
I want to dockerize it all and use a parametric URL.
Is there a way to do this? Is it possible?
If I understood correctly you need a dynamic web page containing a script that makes HTTP calls to a configurable URL.
That can be achieved using Server Side Includes (SSI) with nginx. The web page can include a configuration file that is created during the initialization of the container.
Create the file into the nginx document root when the image is first started. For example:
docker run -e URL=http://stackoverflow.com --entrypoint="/bin/sh" nginx -c 'echo ${URL} > /usr/share/nginx/html/url_config && exec nginx -g "daemon off;"'
For a real world scenario create a custom image based on nginx and override the entrypoint. The entrypoint creates the configuration file with the URL environment variable and eventually launches nginx in foreground.
Include the configuration file in the web page using the #include SSI directive
<script>
...
const http = new XMLHttpRequest();
const url = '<!--#include file="url_config" -->';
http.open("GET", url);
http.send();
...
</script>
Configure nginx to process SSI by adding the ssi on; directive
http {
ssi on;
...
}
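The custom-image variant of that entrypoint could be sketched as follows (the file name docker-entrypoint.sh is an assumption; URL is the variable from the example above):

```sh
#!/bin/sh
# docker-entrypoint.sh: write the configured URL where the SSI include
# expects to find it, then hand control to nginx in the foreground
echo "${URL}" > /usr/share/nginx/html/url_config
exec nginx -g 'daemon off;'
```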
Hope it helps.
Yes, it is possible; you can do almost anything with Docker (you could also use Docker networking if you need routing). Judging by your question title, you want to use an environment variable in a URL in the nginx configuration. Here is a way to do that:
Set the environment variable in the docker run command or the Dockerfile.
Use that environment variable in your nginx config file.
docker run --name nginx -e APP_HOST_NAME="myexample.com" -e APP_HOST_PORT="3000" yourimage_nginx
Now you can use these variables in your nginx configuration:
server {
set_by_lua $curr_server_name 'return os.getenv("APP_HOST_NAME")';
set_by_lua $curr_server_port 'return os.getenv("APP_HOST_PORT")';
location / {
proxy_pass http://$curr_server_name:$curr_server_port/index.html;
}
}
To work with environment variables in nginx, check out the lua-nginx-module.
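One caveat with the set_by_lua approach: nginx strips environment variables from its worker processes unless they are whitelisted, so for os.getenv to see them, each variable must be declared with the env directive in the main (top-level) context of nginx.conf:

```nginx
# main context of nginx.conf, outside any http/server block
env APP_HOST_NAME;
env APP_HOST_PORT;
```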

Dockerfile COPY not overwriting Nginx configuration (or Nginx overwrites on container start) - why?

I have a Dockerfile I use for containerizing Python Flask-based microservices that's based on this base Docker image: https://github.com/tiangolo/uwsgi-nginx-flask-docker
In my Dockerfile, I add a custom nginx.conf and overwrite Nginx's:
FROM tiangolo/uwsgi-nginx-flask:python3.6
ADD nginx.conf nginx.conf
COPY ./app /app
COPY ./data /app/data
COPY nginx.conf /etc/nginx/conf.d/
My custom nginx.conf includes only one change - a single server_name directive that I populate with a custom domain name:
server {
listen 80;
location / {
try_files $uri #app;
}
location #app {
include uwsgi_params;
uwsgi_pass unix:///tmp/uwsgi.sock;
}
location /static {
alias /app/static;
}
server_name my-fully-qualified-domain-name.com;
}
The reason for this is that I want to run Let's Encrypt's certbot utility to force Nginx to be SSL-only within the container.
The problem: Docker refuses to overwrite nginx.conf. It pretty much refuses to put anything I try into /etc/nginx/conf.d/.
Or maybe Docker does overwrite it, but something within Nginx on start (at container start) overwrites my changes. I haven't figured it out, but I'd really like to clobber that nginx.conf with my own changes.
Even attaching to the container and manually overwriting Nginx's configuration - then committing those changes to the container using docker commit fails. I suspect there's just something I'm not understanding about how Docker's COPY command works or how docker commit works - any thoughts/suggestions?
Note #1 - I have not been able to get a custom server_name field working with certbot using separate Nginx configuration files (per these instructions). The only way I've been able to get certbot to pick up the right server_name has been by clobbering & overwriting the default nginx.conf, hence going this approach. Perhaps I'm simply using custom Nginx configuration files incorrectly - any suggestions on that note would be greatly appreciated - but I had gone down that road before and was not successful.
Note #2 - I am able to run certbot on a running container (after attaching & overwriting Nginx's configuration), and that works great - SSL on my container, awesome - until the container stops and restarts. Then it's all wiped away and I need to overwrite Nginx's configuration & run certbot again - not ideal at all.
You shouldn't overwrite the default nginx.conf file (see https://github.com/tiangolo/uwsgi-nginx-flask-docker#customizing-nginx-configurations).
However you can still add your own configuration in a separate file within /etc/nginx/conf.d/, which should be enough for most use cases.
Edit:
If that doesn't work you can modify entrypoint.sh to better suit your needs since nginx.conf is set there. This issue contains a bit more info: https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/39
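Following that README's approach, the Dockerfile from the question would drop a separate file into /etc/nginx/conf.d/ rather than replacing nginx.conf (the file name custom.conf is an assumption):

```dockerfile
FROM tiangolo/uwsgi-nginx-flask:python3.6
COPY ./app /app
COPY ./data /app/data
# add a drop-in config rather than clobbering the image's nginx.conf
COPY custom.conf /etc/nginx/conf.d/custom.conf
```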

Have nginx set header on response according to environment var

I use a simple Nginx docker container to serve some static files. I deploy this container to different environments (e.g. staging, production etc) using Kubernetes which sets an environment variable on the container (let's say FOO_COUNT).
What's the easiest way of getting Nginx to pass the value of FOO_COUNT as a header on every response without having to build a different container for each environment?
Out of the box, nginx doesn't support environment variables inside most configuration blocks, but you can use envsubst as explained in the docs for the official nginx Docker image.
Just create a configuration template that's available inside your nginx container named, e.g. /etc/nginx/conf.d/mysite.template.
It contains your nginx configuration, e.g.:
location / {
add_header X-Foo-Header "${FOO_COUNT}";
}
Then override the Dockerfile command with
/bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
If you're using a docker-compose.yml file, use command.
Kubernetes provides a similar way to override the default command in the image.
This functionality is now built into the official nginx Docker image (1.19+).
All you need to do is add your default.conf.template file (the entrypoint looks for the *.template suffix) to /etc/nginx/templates, and the startup script will run envsubst on it and write the output to /etc/nginx/conf.d.
Read more in the image docs; there's an example at https://devopsian.net/notes/docker-nginx-template-env-vars/
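A minimal template for that built-in mechanism might look like this (the header and FOO_COUNT carry over from the question; the rest is illustrative):

```nginx
# /etc/nginx/templates/default.conf.template
server {
    listen 80;
    location / {
        root /usr/share/nginx/html;
        add_header X-Foo-Header "${FOO_COUNT}";
    }
}
```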

Simple docker containers: Build dedicated image or mount config as volume?

I'm putting together a docker-compose.yml file to run the multiple services for a project I'm working on. This project has a Magento and Wordpress website residing under the same domain, with that "same domain" aspect requiring a very simple nginx container to route requests to either service.
So I have this architected as 4 containers (visualisation):
A "magento" container, using an in-house project-specific image.
A "wordpress" container, using an in-house project-specific image.
A "db" container running mysql:5.6, with the init db dumps mounted at /docker-entrypoint-initdb.d.
A "router" container running nginx:alpine with a custom config mounted at /etc/nginx/nginx.conf. This functions as a reverse-proxy with two location directives set up. location / routes to "magento", and location /blog routes to "wordpress".
I want to keep things simple and avoid building unnecessary custom images, but in the context of the "router" I'm not sure what I'm doing is the best approach, or if that would be better off as a project-specific image.
I'm leaning toward my current approach of mounting a custom config into the nginx:alpine container, because the configuration is specific to the stack that's running – it wouldn't make sense as a single standalone container.
So, comparing the two methods: without a custom image, we have the following in docker-compose.yml:
router:
image: nginx:alpine
networks:
- projectnet
ports:
- "80:80"
volumes:
- "./router/nginx.conf:/etc/nginx/nginx.conf"
Otherwise, we have a Dockerfile containing the following, as I've seen suggested across the internet and in other StackOverflow responses.
FROM nginx:alpine
ADD nginx.conf /etc/nginx/
Does anybody have arguments for/against either approach?
If you 'bake in' the nginx config (your second approach)
ADD nginx.conf /etc/nginx/
it makes your docker containers more portable - i.e. they can be downloaded and run on any server capable of running docker and it will just work.
If you use option 1, mounting the config file at run time, then you are transferring one of your dependencies to outside of your container. This makes it a dependency that must be managed outside of docker.
In my opinion, it is best to put as many dependencies inside your Dockerfiles as possible, because it makes them more portable and more automated (great for CI pipelines, for example).
There are reasons for mounting files at run time, and these are usually centred around environment-specific settings (although these can largely be overcome within Docker too) or 'sensitive' files that application developers shouldn't or couldn't have access to - for example, SSL certificates, database passwords, etc.
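That run-time-mount case for sensitive files might look like this in docker-compose.yml (paths and names are illustrative):

```yaml
services:
  router:
    image: nginx:alpine
    ports:
      - "443:443"
    volumes:
      # environment-specific, sensitive material stays out of the image
      - ./certs/server.crt:/etc/nginx/certs/server.crt:ro
      - ./certs/server.key:/etc/nginx/certs/server.key:ro
```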
