Docker image with custom nginx.conf

I am fairly new to this, so I don't know if I am heading in the right direction or not. I have a custom nginx.conf that works fine, and I am now trying to build a Docker image with it so that I can run it as a container in Kubernetes.
Here is my nginx.conf:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {
    # define the various upstreams
    upstream app_1 {
        server 192.168.122.206:5678;
    }
    upstream app_2 {
        server 192.168.122.206:9000;
    }

    # map source port to upstream
    map $remote_port $backend_svr {
        1234 "app_1";
        1235 "app_2";
    }

    # all udp traffic received on 8000, depending on source it will be redirected
    server {
        listen 8000 udp;
        proxy_pass $backend_svr;
    }
}
On my virtual machine I have /home/asvilla/docker-files/nginx-udp/Dockerfile, which contains:
FROM nginx
RUN chmod +w /etc/nginx/nginx.conf
COPY nginx.conf /etc/nginx/nginx.conf
RUN cat /etc/nginx/nginx.conf
I build it using
docker build -t "custom_nginx:dockerfile" .
The nginx container should redirect UDP traffic incoming on port 8000 to either port 5678 or port 9000, depending on the source of the UDP packet.
I run it with docker run 'image-id', but it doesn't function as expected. Running docker ps shows "PORTS 80/tcp"
and "COMMAND nginx -g 'daemon off;' ...".
Any pointers on what these mean? nginx by default binds TCP port 80, but I have changed the nginx.conf, and the cat command I run shows that the file is updated.
I am assuming that I have to expose the ports in the nginx.conf somehow. Any help much appreciated.

If your end goal is to run this in Kubernetes, your easiest path will be to put this config file into a ConfigMap and just configure your Deployment to run the standard nginx image. (In plain Docker, you can use docker run -v to inject the config file into the container at runtime to similar effect.)
It doesn't really matter what port nginx listens on inside the container. If the stock nginx container expects to listen on the standard HTTP port 80 (and it looks like its Dockerfile has an EXPOSE 80 directive), then you can embrace that and listen 80 in your nginx config (over TCP, not UDP). Then in your Kubernetes Deployment you can specify that as a container port, and if you want to map it to something else, you can do that in the Service that wraps this. (In plain Docker, if you want host port 8000 to avoid conflicting with other things, docker run -p 8000:80.)
In terms of best practices I'd discourage directly writing IP addresses into config files. If it's a persistent server outside your cluster, you can set up a DNS server in your network to resolve its hostname, or get a cloud service like Amazon's Route 53 to do it for you. If it's in Kubernetes, use the service's DNS name, backend.default.svc.cluster.local. Even if you really have only an IP address, creating an ExternalName service will help you if the service ever moves.
Assuming you have the config file in a ConfigMap, your Deployment would look very much like the sample Deployment in the Kubernetes documentation (it even runs an nginx:1.7.9 container publishing port 80).
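Concretely, a minimal sketch of that ConfigMap plus Deployment (the names nginx-conf and custom-nginx are placeholders, and the subPath mount replaces only /etc/nginx/nginx.conf in the stock image; adjust the container port if you switch the config to listen 80 as suggested above):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    # paste the full nginx.conf from the question here
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-nginx
  template:
    metadata:
      labels:
        app: custom-nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 8000   # matches "listen 8000 udp" in the config
              protocol: UDP
          volumeMounts:
            - name: conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
      volumes:
        - name: conf
          configMap:
            name: nginx-conf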

You must publish the port at runtime, and since your server block listens on UDP you need the /udp suffix:
docker run -p 8000:8000/udp image-id

In your Dockerfile you need to add an EXPOSE instruction:
FROM nginx
RUN chmod +w /etc/nginx/nginx.conf
COPY nginx.conf /etc/nginx/nginx.conf
RUN cat /etc/nginx/nginx.conf
EXPOSE 8000/udp
Then when you run it you execute: docker run -p 8000:8000/udp custom_nginx:dockerfile
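Once it is running, a quick way to exercise the UDP path (a sketch assuming the OpenBSD netcat; your map keys off the client's source port, so it has to be forced to 1234 or 1235, and sending straight to the container's bridge IP avoids NAT rewriting that source port):
# find the container's IP on the default bridge network
CONTAINER_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' <container-id>)
# -u = UDP, -w1 = 1s timeout, -p 1234 = source port that the map sends to app_1
echo "hello" | nc -u -w1 -p 1234 "$CONTAINER_IP" 8000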

Related

Use nginx as a reverse proxy for a Docker container

I am new to using Docker and nginx together, so I apologize in advance for my simple question, which I have not been able to answer despite going through many resources on YouTube.
I created an Ubuntu server and ran the following command:
sudo apt install nginx
Now I have a very simple Flask application Docker image (publicly available on Docker Hub and not developed by me), and I want to configure my nginx to work as a reverse proxy to the container running that image.
My code for the reverse proxy in nginx configuration is as follows:
server {
    listen 80;

    location / {
        proxy_pass "http://192.168.0.20:8000";
    }
}
192.168.0.20 is my server IP, and 8000 is the host port to which I am publishing my Docker container, like this:
docker container run -p 8000:5000 <image>
But when I open http://192.168.0.20/ it serves the default nginx index.html, whereas I want it to forward to my app container, which serves that static file; when I open http://192.168.0.20:8000/ I get the desired output.
This might sound like a dumb question, but I have been struggling to get the hang of nginx.
Thanks in advance for the help!
To reach the host from inside the container, you can't use 192.168.0.20, since that address isn't known inside the container. Depending on your host OS, you can use 172.17.0.1 (Linux) or host.docker.internal (Windows). If your OS is Linux, you should change your config to
server {
    listen 80;

    location / {
        proxy_pass "http://172.17.0.1:8000/";
    }
}
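If you are not sure that 172.17.0.1 is the right address on your system, one way to check the bridge network's gateway (a sketch against the default bridge network; the template assumes the usual docker network inspect output):
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'
# typically prints 172.17.0.1 on a stock Linux install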
Rather than installing nginx yourself, you can use the nginx images that are available on Docker Hub. To get your config file into the image, copy it to /etc/nginx/conf.d/default.conf. Create a Dockerfile containing:
FROM nginx
COPY nginx.conf /etc/nginx/conf.d/default.conf
Build and run it with
docker build -t mynginx .
docker run -d -p 80:80 mynginx
Now you should be able to go to http://192.168.0.20/ and get a response from the Flask app.
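If you'd rather not build a custom image at all, a bind mount achieves the same thing (a sketch assuming nginx.conf sits in the current directory):
docker run -d -p 80:80 \
  -v "$(pwd)/nginx.conf:/etc/nginx/conf.d/default.conf:ro" \
  nginx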

Docker expose port internals

In Docker we all know how to expose ports: the EXPOSE instruction in the Dockerfile, and the -p or -P options to publish them at runtime. When we use "docker inspect" or "docker port" to see the port mappings, these configs are pulled from /var/lib/docker/containers/container-id/config.v2.json.
The question I have is: when we expose a port, how does Docker actually change the port inside the container, say for Apache or nginx? The installation could be anywhere in the OS or file path, so how does Docker find the correct conf file (Apache's /etc/httpd/conf/httpd.conf) to change, supposing Docker edits the "Listen 80" or "Listen 443" line in httpd.conf? Or is my whole understanding of Docker at stake? :)
Any help is appreciated.
"docker" does not change anything in the internal configuation of the container (or the services it provides).
There are three different points where you can configure ports
the service itself (for instance nginx) inside the image/container
EXPOSE xxxx in the Dockerfile (ie at build time of the image)
docker run -p 80:80 (or the respective equivalent for docker compose) (ie at the runtime of the container)
All three are (in principle) independent of each other, i.e. you can have completely different values in each of them. But in practice, you will have to adjust them to each other to get a working system.
We know that EXPOSE xxxx in the Dockerfile doesn't actually publish any port at runtime; it just tells the Docker service that that specific container will listen on port xxxx at runtime. You can see it as a sort of documentation for the image. So it's your responsibility as creator of the Dockerfile to provide the correct value here, because anyone using that image will probably rely on it.
But regardless of what port you have EXPOSEd (or not, EXPOSE is completely optional), you still have to publish that port when you run the container (for instance when using docker run via -p aaaa:xxxx).
Now let us assume you have an nginx image which has the nginx service configured to listen to port 8000. Regardless of what you define with EXPOSE or -p aaaa:xxxx, that nginx service will always listen to port 8000 only and nothing else.
So if you now run your container with docker run -p 80:80, the runtime will bind port 80 of the host to port 80 of the container. But as there is no service listening on port 80 within the container, you simply won't be able to contact your nginx service on port 80. And you also won't be able to connect to nginx on port 8000, because it hasn't been published.
So in a typical setup, if your service in the container is configured to listen on port 8000, you should also EXPOSE 8000 in your Dockerfile and use docker run -p aaaa:8000 to bind port aaaa of your host machine to port 8000 of your container, so that you will be able to connect to the nginx service via http://hostmachine:aaaa.
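To make that concrete, here is a minimal sketch tying the three places together (the image tag my-nginx-8000 and the host port 8080 are arbitrary choices for illustration):
# 1. Service config (default.conf): nginx listens on 8000 inside the container
#      server { listen 8000; root /usr/share/nginx/html; }
# 2. Dockerfile: document that port
#      FROM nginx
#      COPY default.conf /etc/nginx/conf.d/default.conf
#      EXPOSE 8000
# 3. Runtime: publish host port 8080 to container port 8000
docker build -t my-nginx-8000 .
docker run -d -p 8080:8000 my-nginx-8000
curl http://localhost:8080   # reaches the nginx service listening on 8000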

Docker, nginx, nginx proxy not redirecting to the right container

I'm setting up some containers on my Ubuntu server. I've created two simple images this way:
Dockerfile: static-site
FROM nginx:alpine
COPY ./site/ /usr/share/nginx/html
Dockerfile: static-content
FROM nginx:alpine
COPY ./assets/ /usr/share/nginx/html
The Dockerfiles are in different locations.
Up to here, no problem at all. I've installed nginx-proxy and used the VIRTUAL_HOST variable to run them:
docker run -d -p 80 -e VIRTUAL_HOST=mysite.com static-site
docker run -d -p 80 -e VIRTUAL_HOST=static.mysite.com static-content
The result is that whatever address I put in the browser, it always redirects me to mysite.com.
What am I doing wrong?
Also, I have a DNS record like this:
*.mysite.com 86400 A 0 xxx.xxx.xx.xx (the ip of mysite.com)
Could it be the problem?
You can't bind two containers to the same host port ("80"). Most probably the second container is dead (you can verify this by running docker ps), or it is running with automatically assigned ports:
docker ps --format " {{.Image}} ==> {{.Ports}}"
nginx ==> 0.0.0.0:32769->80/tcp
nginx ==> 0.0.0.0:32768->80/tcp
To fix this issue, you either use different ports for the containers and configure your DNS to point at a load balancer (so you can configure the destination port), or you switch to a single nginx with multiple server definitions, as shown below.
Dockerfile:
FROM nginx:alpine
COPY ./site/ /usr/share/nginx/site_html
COPY ./assets/ /usr/share/nginx/assets_html
COPY ./site.conf /etc/nginx/conf.d/default.conf
Nginx Config:
server {
    listen 80;
    server_name mysite.com;
    root /usr/share/nginx/site_html;
}

server {
    listen 80;
    server_name static.mysite.com;
    root /usr/share/nginx/assets_html;
}
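Build and run the combined image (the tag combined-site is just a placeholder); with your wildcard DNS record both hostnames resolve to the same host, and nginx picks the server block by the Host header:
docker build -t combined-site .
docker run -d -p 80:80 combined-site

# quick check without relying on DNS: force the Host header
curl -H "Host: static.mysite.com" http://localhost/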

Reverse Proxy with NGINX and Docker

I have an application running on http://localhost:8000 using a Docker image that I made myself.
Now I want to use nginx as a reverse proxy listening on port 80, redirecting to localhost:8000.
Here is my nginx.conf file:
#user nobody;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://localhost:8000;
        }
    }
}
Here is my Dockerfile:
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY index.html /usr/share/nginx/html
COPY nginx.conf /etc/nginx
CMD nginx
To build the image I use the command
docker build --no-cache -t mynginx .
To run it, I use
docker run -p 80:80 -d mynginx
Now, if I test from my local computer with curl localhost:8000, everything works, but if I try curl localhost, I get a Bad Gateway error.
Moreover, I tried serving static content and it works, but with the reverse proxy settings it does not.
The reason you are getting a bad gateway is that from within your nginx container, localhost resolves to the container itself and not to the host system.
If you want to access your application container from your reverse proxy container, you need to put both containers into a network.
docker network create my-network
docker network connect --alias application my-network <application container id/name>
docker network connect --alias reverse-proxy my-network <reverse proxy container id/name>
The network name (my-network here) can be arbitrary, and --alias should be the hostname you want to resolve to each container.
In fact, you do not need to provide an alias if you have already assigned a (host)name (docker run --name ...) to your containers.
You can then change your proxy_pass directive to the (alias) name of your application container:
proxy_pass http://application:8000;
Also refer to the documentation: Container networking
curl localhost:8000 works => your application is running
curl localhost returning Bad Gateway means proxy_pass http://localhost:8000; does not work. This is because "localhost" is relative to its caller, in this case the nginx container, which does not have anything running on port 8000.
You need to point it to your application using proxy_pass http://your_app:8000; where "your_app" is the service name (docker-compose) or the container name or its network alias. Make sure they are in the same network.
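For example, a minimal docker-compose sketch of that setup (the service name your_app and the image name your-app-image are placeholders; Compose puts both services on a shared default network, so nginx can reach the app by its service name):
version: "3"
services:
  your_app:
    image: your-app-image   # your application, listening on 8000 inside the container
  reverse-proxy:
    build: .                # the nginx image from this question, with proxy_pass http://your_app:8000;
    ports:
      - "80:80"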

Nginx docker container proxy pass to another port

I want to run nginx in a Docker container; it listens on port 80, and I want it to proxy_pass to port 8080 when the URL starts with /api. I have a web app listening on port 8080. This has been working for me without Docker, but with Docker I couldn't get it to work.
My nginx.conf is like:
location / {
    # serve static page
}

location /api {
    proxy_pass http://0.0.0.0:8080;
}
I run my nginx container with docker run -d -p 80:80 -p 8080:8080 nginx
My problem is that now I can no longer run my web app, because it can't listen on port 8080 since the nginx container is already bound to it.
docker run -d --net host nginx
Try it!
The nginx container will share the host network, with its IP and all ports.
First, you need to create a network to place both containers:
docker network create nginx_network
Then, you should specify Docker's DNS server in nginx configuration:
location /api {
    # Docker DNS
    resolver 127.0.0.11;
    # my_api - name of container with your API, see below
    proxy_pass http://my_api:8080;
}
Finally, run your containers:
docker run --network="nginx_network" -d --name my_api your_api_container
docker run --network="nginx_network" -d -p 80:80 nginx
Note:
the --name value for the API container must match the domain name in the nginx config
it's enough to publish only port 80 for your nginx container
run your API container first and then the nginx container (see below)
both containers must be in the same network
This should work.
If you run the nginx container first, nginx will try to resolve the domain name my_api on startup and fail, because a container with that name doesn't exist yet. In this case there is the following workaround (not sure if it is a good solution). Modify the nginx config:
location /api {
    # Docker DNS
    resolver 127.0.0.11;
    # hack to prevent nginx from resolving the domain on startup
    set $docker_host "my_api";
    # my_api - name of container with your API, see below
    proxy_pass http://$docker_host:8080;
}
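To check that name resolution inside the network actually works, you can do a quick lookup from a throwaway container (a sketch using the busybox image; output format varies between images):
# resolve the API container's name from inside nginx_network
docker run --rm --network nginx_network busybox nslookup my_api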
You can (or rather should) have only one process per Docker container, which means you will have nginx running in one container and your application in another. The old way is to create links between containers like this:
$ docker run --name my-app -d myself/myapp
$ docker run --name proxy --link my-app:my-app -d nginx
This will add a line to /etc/hosts in the nginx container so it will be able to call the other container by its name.
And then in nginx.conf file:
location /api {
    proxy_pass http://my-app:8080;
}
However, according to the official Docker docs this method is deprecated and you should only use it if it's "absolutely needed". Instead you should use Docker networking. Theoretically, if both containers are in the same network and the local DNS server (embedded in Docker) is working, they should be able to see each other without the --link parameter. Unfortunately it didn't work for me for some reason: nginx didn't have the correct DNS configured in /etc/resolv.conf. But read the article and play around with it, I'm sure it will work.
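For reference, the non-deprecated equivalent with a user-defined network looks roughly like this (the network name app-net is a placeholder; the image names reuse the ones from the example above):
docker network create app-net
docker run --name my-app --network app-net -d myself/myapp
docker run --name proxy --network app-net -p 80:80 -d nginx
# inside the proxy container, "my-app" now resolves via Docker's embedded DNS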
