Nginx static webpage and Docker with env variable url - docker

I've got an nginx web server serving a webpage that makes some HTTP calls.
I want to dockerize it all and make the target URL configurable.
Is there a way to do this? Is it possible?

If I understood correctly, you need a dynamic web page containing a script that makes HTTP calls to a configurable URL.
That can be achieved with Server Side Includes (SSI) in nginx. The webpage includes a configuration file that is created during the initialization of the container.
Create the file in the nginx document root when the container starts. For example:
docker run -e URL=http://stackoverflow.com --entrypoint="/bin/sh" nginx -c 'printf "%s" "${URL}" > /usr/share/nginx/html/url_config && exec nginx -g "daemon off;"'
(printf is used instead of echo so the included string doesn't pick up a trailing newline, and && chains the two commands instead of backgrounding the first.)
For a real-world scenario, create a custom image based on nginx and override the entrypoint. The entrypoint creates the configuration file from the URL environment variable and finally launches nginx in the foreground.
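A minimal sketch of such an entrypoint script, assuming the same URL variable and document root as above:

```sh
#!/bin/sh
# Write the configurable URL where the SSI include can pick it up
# (printf avoids the trailing newline echo would add).
printf '%s' "${URL}" > /usr/share/nginx/html/url_config
# Replace this shell with nginx running in the foreground.
exec nginx -g 'daemon off;'
```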
Include the configuration file in the web page using the #include SSI directive
<script>
...
const http = new XMLHttpRequest();
const url = '<!--#include file="url_config" -->';
http.open("GET", url);
http.send();
...
</script>
Configure nginx to process SSI by adding the ssi on; directive
http {
ssi on;
...
}
Hope it helps.

Yes, it is possible; you can do almost anything with Docker. (If you also want routing between containers, look at Docker networks, but that's a separate topic.) According to your question title, you want to use an environment variable in a URL in the nginx configuration. Here is a way to do that:
Set Environment in Docker run command or Dockerfile
Use that Environment variable in your Nginx config file
docker run --name nginx -e APP_HOST_NAME="myexample.com" -e APP_HOST_PORT="3000" yourimage_nginx
Now you can use these variables in your nginx configuration.
server {
# requires the lua-nginx-module, plus "env APP_HOST_NAME;" and
# "env APP_HOST_PORT;" at the top level of nginx.conf so that
# os.getenv() can see the variables
set_by_lua $curr_server_name 'return os.getenv("APP_HOST_NAME")';
set_by_lua $curr_server_port 'return os.getenv("APP_HOST_PORT")';
location / {
proxy_pass http://$curr_server_name:$curr_server_port/index.html;
}
}
To work with environment variables in nginx this way, check out the lua-nginx-module.

How to use SSL key and cert files in docker?

I have a FastAPI app started with uvicorn. This app is in a Docker container and works fine.
Now I want to run uvicorn with SSL, so I need the PEM and KEY files inside the container. On the dev machine these files are /ssl/localhost.pem and /ssl/localhost.key; on the prod server they are /ssl/certs/prod.pem and /ssl/private/prod.key.
My idea was to define SSL_KEY and SSL_PEM env vars with system path to that files and use it in Dockerfile:
RUN cp SSL_KEY /ssl.key
...
CMD uvicorn ... --ssl-keyfile=/ssl.key ...
But it doesn't work, and I do not know why.
Do you have any ideas about how to implement this case? Please :-)
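(One common pattern for this, sketched as a suggestion rather than a tested answer: keep the in-container paths fixed and vary only the host side of a bind mount per environment, so the Dockerfile and CMD never change. Image and module names below are placeholders.)

```sh
# dev machine (paths from the question); on prod, mount
# /ssl/certs/prod.pem and /ssl/private/prod.key instead
docker run \
  -v /ssl/localhost.pem:/ssl.pem:ro \
  -v /ssl/localhost.key:/ssl.key:ro \
  myimage uvicorn main:app --ssl-certfile=/ssl.pem --ssl-keyfile=/ssl.key
```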

Passing environment variables at runtime to Vue.js application with docker-compose

This is my second post about this particular issue. I've since deleted that question because I've found a better way to explain what exactly I'd like to do.
Essentially, I'd like to pass command line arguments to docker-compose up and set them as environment variables in my Vue.js web application. The goal is to be able to change the environment variables without rebuilding the container every time.
I'm running into several issues with this. Here are my docker files:
Dockerfile for Vue.js application.
FROM node:latest as build-stage
WORKDIR /app
# Environment variable.
ENV VUE_APP_FOO=FOO
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM nginx as production-stage
RUN mkdir /app
COPY --from=build-stage /app/dist /app
COPY nginx.conf /etc/nginx/nginx.conf
VUE_APP_FOO is stored in and accessible via Node's process.env object, and seems to be passed in at build time.
And my docker-compose.yml:
version: '3.5'
services:
ms-sql-server:
image: mcr.microsoft.com/mssql/server:2017-latest-ubuntu
ports:
- "1430:1433"
api:
image: # omitted (pulled from url)
restart: always
depends_on:
- ms-sql-server
environment:
DBServer: "ms-sql-server"
ports:
- "50726:80"
client:
image: # omitted(pulled from url)
restart: always
environment:
- VUE_APP_BAR="BAR"
depends_on:
- api
ports:
- "8080:80"
When I ssh into the client container with docker exec -it <container_name> /bin/bash, the VUE_APP_BAR variable is present with the value "BAR". But the variable is not present in the process.env object in my Vue application. Something odd seems to be happening with Node and its environment variables; it's as if it ignores the container environment.
Is there any way for me to access the container-level variables set in docker-compose.yml inside my Vue.js application? Furthermore, is there any way to pass those variables as arguments to docker-compose up? Let me know if you need any clarification/more information.
So I figured out how to do this in sort of a hacky way that works perfectly for my use case. A quick review of what I wanted to do: be able to pass environment variables via a docker-compose file to a Vue.js application, to allow different team members to test against different development APIs depending on their assignment (localhost if running the server locally, api-dev, api-staging, api-prod).
The first step is to declare your variables in a JS file inside your VueJS project (it can be defined anywhere) formatted like this:
export const serverURL = 'VUE_APP_SERVER_URL'
Quick note about the value of this string: it has to be completely unique within your entire project. If any other string or variable name in your application matches it, that occurrence will also get replaced with the docker environment variable we pass using this method.
Next we have to go over to our docker-compose.yml and declare our environment variable there:
docker-compose.yml
your_vuejs_client:
build: /path/to/vuejs-app
restart: always
environment:
VUE_APP_SERVER_URL: ${SERVER_URL}
ports:
- "8080:80"
Now when you run docker-compose up in your terminal, you should see something like this:
WARNING: The SERVER_URL variable is not set. Defaulting to a blank string.
After we have our docker-compose set up properly, we need to create an entrypoint script in the VueJS application to run before the app is served by nginx. To do this, navigate back to your VueJS directory and run touch entrypoint.sh to create a blank bash script. Open it up; this is what I have in mine:
entrypoint.sh
#!/bin/sh
ROOT_DIR=/usr/share/nginx/html
echo "Replacing env constants in JS"
for file in $ROOT_DIR/js/app.*.js* $ROOT_DIR/index.html $ROOT_DIR/precache-manifest*.js;
do
echo "Processing $file ...";
sed -i 's|VUE_APP_SERVER_URL|'${VUE_APP_SERVER_URL}'|g' $file
done
The sed line traverses the files matched above, replacing every occurrence of the string 'VUE_APP_SERVER_URL' with the environment variable from docker-compose.yml.
Finally we need to add some lines to our VueJS application Dockerfile to tell it to run the entrypoint script we just created before nginx is started. So right before the CMD ["nginx", "-g", "daemon off;"] line in your Dockerfile, add the lines below:
VueJS Dockerfile
# Copy entrypoint script as /entrypoint.sh
COPY ./entrypoint.sh /entrypoint.sh
# Grant Linux permissions and run entrypoint script
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
After that, run docker-compose run -e SERVER_URL=yourserverapi.com/api and the serverURL constant we set in a JS file at the beginning will be replaced with whatever you supply in the docker-compose command. This was a pain to finally get working, but I hope it helps anyone facing similar trouble.
The great thing is that you can add as many environment variables as you want: just add more lines to the entrypoint.sh file and define them in the Vue.js application and your docker-compose file. Some of the ones I've used provide a different endpoint for the USPS API depending on whether you're running locally or hosted in the cloud, different Maps API keys based on whether the instance is running in production or development, etc.
I really hope this helps someone out, let me know if anyone has any questions and I can hopefully be of some help.
The client app runs in a web browser, but environment variables live on the server. The client needs a way to obtain the environment variable's value from the server.
To accomplish that, you have several options, including:
Leverage nginx to serve the environment variable itself, using an approach like one of these: nginx: use environment variables. Depending on your needs this approach can be quick and more or less dynamic, though maybe less formal and elegant. Or
Implement a server API (Node.js?) that reads the environment variable and returns it to the client over an AJAX call. This approach is elegant, dynamic, API-centric. Or
Lastly, if the environment variable is static per nginx instance per deployment, you could build the static assets of the Vue app during deployment and hard-code the environment variable right there in the static assets. This approach is somewhat elegant but pollutes client code with server details, and it is static (the value can only change on deployment).
As I posted here https://stackoverflow.com/a/63097312/4905563, I have developed a package that could help.
Try it with npm install jvjr-docker-env and take a look at the README.md for some usage examples.
Even though the question title asks how to consume environment variables on the Vue.js side, the questioner's goal is to configure the backend API endpoint dynamically, without rebuilding the Docker image.
I achieved it by using a reverse proxy.
For dev runs, configure the reverse proxy in vue.config.js, which is consumed by the vue-cli dev server.
For the nginx run, configure the reverse proxy in nginx.conf. You can use an nginx template to read environment variables.
This approach also eliminates the need for CORS configuration on the web-API server side, since the web API is called from the Vue app's web server, not directly from the browser.
More thorough working sample can be found on this commit.
vue.config.js:
module.exports = {
devServer: {
proxy: {
'/api': {
target: 'http://host.docker.internal:5000',
},
},
},
};
nginx.conf:
...
http {
...
include /etc/nginx/conf.d/*.conf;
}
nginx.default.conf.template:
server {
listen 80;
...
location /api {
proxy_pass ${WEBAPI_ENDPOINT};
}
}
Dockerfile:
...
COPY nginx.conf /etc/nginx/nginx.conf
COPY nginx.default.conf.template /etc/nginx/templates/default.conf.template
Add a static config.js script in index.html. This file is not processed by Webpack but included verbatim. Use your docker-compose file, Kubernetes manifest, AWS ECS task definition, or similar to override that file at run time.
For example, in my Vue project:
public/config.js
// Run-time configuration. Override this in e.g. your Dockerfile, kubernetes pod or AWS ECS Task.
// Use only very simple, browser-compatible JS.
window.voxVivaConfig = {};
public/index.html
<!DOCTYPE html>
<html lang="en">
<head>
<!-- ... -->
<!-- Allow injection of run-time config -->
<script async src="config.js"></script>
<!-- ... -->
</head>
<body>
<div id="app" aria-busy="true">
<noscript><p>This app requires JavaScript.</p></noscript>
</div>
</body>
</html>
src/config.js
function getRunTimeConfig() {
if (typeof window.voxVivaConfig === "object") {
return window.voxVivaConfig || {};
}
return {};
}
export default Object.freeze({
appTitle: "My App",
backEndBaseUrl: process.env.VUE_APP_BACK_END_BASEURL || "https://example.com",
whatever: process.env.VUE_APP_WHATEVER || "",
/**
* Allow config specified at run time to override everything above.
*/
...getRunTimeConfig(),
});
Advantages
This puts all config in one place, and lets you choose which config values should be specified at compile time, build time or run time, as you see fit.
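For example, with docker-compose the run-time override can be a simple bind mount (a sketch; the image name, file name, and nginx web root are assumptions):

```yaml
services:
  client:
    image: my-vue-app
    ports:
      - "8080:80"
    volumes:
      # Replace the built config.js with an environment-specific copy.
      - ./config.prod.js:/usr/share/nginx/html/config.js:ro
```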

Have nginx set header on response according to environment var

I use a simple Nginx docker container to serve some static files. I deploy this container to different environments (e.g. staging, production etc) using Kubernetes which sets an environment variable on the container (let's say FOO_COUNT).
What's the easiest way of getting Nginx to pass the value of FOO_COUNT as a header on every response without having to build a different container for each environment?
Out-of-the-box, nginx doesn't support environment variables inside most configuration blocks. But, you can use envsubst as explained in the official nginx Docker image.
Just create a configuration template that's available inside your nginx container named, e.g. /etc/nginx/conf.d/mysite.template.
It contains your nginx configuration, e.g.:
location / {
add_header X-Foo-Header "${FOO_COUNT}";
}
Then override the Dockerfile command with
/bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
If you're using a docker-compose.yml file, use command.
Kubernetes provides a similar way to override the default command in the image.
This was made possible in the official nginx Docker image for nginx 1.19+. The envsubst templating is now built in.
All you need to do is add your default.conf.template file (the entrypoint looks for the *.template suffix) to /etc/nginx/templates, and the startup script will run envsubst on it and write the output to /etc/nginx/conf.d.
Read more at the docs, see an example here https://devopsian.net/notes/docker-nginx-template-env-vars/

How to setup mass dynamic virtual hosts in nginx on docker?

How can I setup mass dynamic virtual hosts in nginx As seen here
except using docker as the host machine?
I currently have it setup like this:
# default.conf
server {
root /var/www/html/$http_host;
server_name $http_host;
}
And in my Dockerfile
COPY default.conf /etc/nginx/sites-enabled/default.conf
And after I build the image and run it:
docker run -d -p 80:80 -v www/:/var/www/html
But when I point a new domain (example.dev) at it in my hosts file and create www/example.dev/index.html, it doesn't work at all.
The setup is correct and it works, as I tested on my system. The only issue is that you are copying the file to the wrong path. The docker image doesn't use the sites-enabled path by default; the default config loads everything present in /etc/nginx/conf.d. So you need to copy to that path, and the rest works as expected.
COPY default.conf /etc/nginx/conf.d/default.conf
Make sure to map your volumes correctly. While testing, I used the docker command below:
docker run -p 80:80 -v $PWD/www/:/var/www/html -v $PWD/default.conf:/etc/nginx/conf.d/default.conf nginx
Below is the output on command line
vagrant@vagrant:~/test/www$ mkdir dev.tarunlalwani.com
vagrant@vagrant:~/test/www$ cd dev.tarunlalwani.com/
vagrant@vagrant:~/test/www/dev.tarunlalwani.com$ vim index.html
vagrant@vagrant:~/test/www/dev.tarunlalwani.com$ cat index.html
<h1>This is a test</h1>
Output in the browser:

what's the docker pattern of serving both static and dynamic content

I have a simple Python/Flask app. It's laid out like this in the container:
/var/www/app/
appl/
static/
...
app.py
wsgi.py
Before using Docker, I used to let nginx serve the static files directly, like this:
location /static {
alias /var/www/www.domain.com/appl/static;
}
location / {
uwsgi_pass unix:///tmp/uwsgi/www.domain.com.sock;
include uwsgi_params;
}
But now the static files are inside the container and not accessible to nginx.
I can think of 2 possible solutions:
start an nginx inside the container as before, and let the host nginx communicate with the container nginx over a port like 8000
mount the (host) /var/www/www.domain.com/static to (container) /var/www/static and copy all static files in run.sh
Which do Docker users prefer?
I prefer the first solution because it stays in line with factor 7 of building a 12-factor app: exposing all services on a port. There's definitely some overhead from requests running through nginx twice, but it probably won't be enough to worry about (if it is, just add more containers to your pool). Using a custom run script to do host-side work after your container starts will make it very difficult to scale your app using tools in the Docker ecosystem.
I do not like the first solution because running more than one service in a single container is not the Docker way.
Overall, we want to expose our static folder to nginx, so a volume is the best choice. But there are a few different ways to do that:
as you mentioned, mount the (host)/var/www/www.domain.com/static to (container)/var/www/static and copy all static files in run.sh
use nginx caching to let nginx cache the static files for you.
For example, we can write our conf like this to let nginx cache static content for 30 minutes (proxy_cache_valid 30m; note that the 30m in keys_zone is the size of the in-memory key zone, i.e. 30 MB, not a duration):
# in the http { } block:
proxy_cache_path /tmp/cache levels=1:2 keys_zone=cache:30m max_size=1G;
upstream app_upstream {
server app:5000;
}
# in a server { } block:
location /static {
proxy_cache cache;
proxy_cache_valid 30m;
proxy_pass http://app_upstream;
}
trust uWSGI, and use uWSGI itself to serve the static content: Serving static files with uWSGI
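That last option could look roughly like this in a uWSGI config (a sketch; the module name follows Flask conventions and the paths follow the layout in the question):

```ini
[uwsgi]
module = wsgi:app
socket = /tmp/uwsgi/www.domain.com.sock
; map /static requests straight to the files on disk
static-map = /static=/var/www/app/appl/static
```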
