Manage other containers with an nginx container - Docker

How do I use nginx in a container to access another container through a config file?
I am a beginner with Docker.
I am trying to learn how to use nginx to manage my applications in Docker containers.
I will use pgadmin as an example application in a container.
Create and start the containers. I tried to use the --link parameter to connect the two containers.
sudo docker create -p 80:80 -p 443:443 --name Nginx nginx
sudo docker create -e PGADMIN_DEFAULT_EMAIL=houzeyu2683#gmail.com -e PGADMIN_DEFAULT_PASSWORD=20121006 -p 5001:80 --link Nginx:PSQLA --name PSQLA dpage/pgadmin4
sudo docker start Nginx
sudo docker start PSQLA
Go into the Nginx container's bash and install the nano editor.
sudo docker exec -it Nginx bash
apt update
apt install nano
Create and set up the nginx config file admin.conf:
nano /etc/nginx/conf.d/admin.conf
The contents of admin.conf are as follows:
server {
    listen 80;
    server_name admin.my-domain-name;
    location / {
        proxy_pass http://PSQLA:80;
    }
}
I get the following error:
2020/10/17 01:57:16 [emerg] 333#333: host not found in upstream "PSQLA" in /etc/nginx/conf.d/admin.conf:5
nginx: [emerg] host not found in upstream "PSQLA" in /etc/nginx/conf.d/admin.conf:5
How do I use nginx in a container to access another container through a config file?

Try the following commands (in the same order) to launch the containers:
sudo docker create -e PGADMIN_DEFAULT_EMAIL=houzeyu2683#gmail.com -e PGADMIN_DEFAULT_PASSWORD=20121006 -p 5001:80 --name PSQLA dpage/pgadmin4
sudo docker create -p 80:80 -p 443:443 --link PSQLA:PSQLA --name Nginx nginx
sudo docker start PSQLA
sudo docker start Nginx
Now edit the Nginx configuration and you should not encounter the error anymore.
TL;DR
As mentioned in the Docker documentation:
When you set up a link, you create a conduit between a source container and a recipient container. The recipient can then access select data about the source.
In order to access PSQLA from the Nginx container, we need to link the Nginx container to the PSQLA container, not the other way around.
Now the question is: what difference does that even make?
For this, we need to understand how the --link option works in Docker.
Docker adds a host entry for the source container to the recipient container's /etc/hosts file.
We can verify this in the /etc/hosts file inside the Nginx container. It contains a new entry something like this (the ID and IP might differ in your case):
172.17.0.4 PSQLA 1117cf1e8a28
This entry lets the Nginx container reach the PSQLA container by its container name.
Please refer to this for a better understanding:
https://docs.docker.com/network/links/#updating-the-etchosts-file
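If you want to check this yourself, something like the following should print that entry once both containers are running:
sudo docker exec Nginx cat /etc/hosts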
Important Note
As mentioned in the Docker documentation:
The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link.
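For completeness, here is a minimal sketch of the same setup on a user-defined bridge network instead of --link (the network name admin-net is my own placeholder; everything else is carried over from the question):
# containers on a user-defined bridge network can resolve each
# other by container name via Docker's embedded DNS
sudo docker network create admin-net
sudo docker create -e PGADMIN_DEFAULT_EMAIL=houzeyu2683#gmail.com -e PGADMIN_DEFAULT_PASSWORD=20121006 -p 5001:80 --network admin-net --name PSQLA dpage/pgadmin4
sudo docker create -p 80:80 -p 443:443 --network admin-net --name Nginx nginx
# start order no longer matters, since resolution does not rely
# on a /etc/hosts entry written at link time
sudo docker start PSQLA
sudo docker start Nginx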

Related

Docker Nginx: host not found in upstream

I have my Docker app running on an AWS EC2 instance, and I am currently trying to map the app to the external IP address using Nginx. (The question included a screenshot of the running containers.)
My test-app is a fairly simple app that displays a static HTML website. I deployed it using the following command:
docker run -d --name=test-app test-app
The nginx-proxy has the following proxy.conf:
server {
    listen 80;
    location / {
        proxy_pass http://test-app;
    }
}
Here is the Dockerfile for the nginx proxy:
FROM nginx:alpine
RUN rm /etc/nginx/conf.d/*
COPY proxy.conf /etc/nginx/conf.d/
nginx-proxy is run using the following command:
docker run -d -p 80:80 --name=nginx-proxy nginx-proxy
However, the nginx container never stays running, and here is the error log I get:
2020/03/27 15:55:19 [emerg] 1#1: host not found in upstream "test-app" in /etc/nginx/conf.d/proxy.conf:5
nginx: [emerg] host not found in upstream "test-app" in /etc/nginx/conf.d/proxy.conf:5
While both of your containers are running and you have properly published the ports required for the NGINX container, you have not exposed any ports for the test-app container, so the NGINX container has no way of talking to it. Publishing ports directly with docker run would likely defeat the point of using a reverse proxy in your situation. Instead, you should create a network for both of your Docker containers and then add them to it; they will then be able to communicate with one another over a bridge. For example:
docker network create example
docker run -d --network=example --name=test-app test-app
docker run -d -p 80:80 --network=example --name=nginx-proxy nginx-proxy
Now that you have both of your containers on the same network, Docker will enable DNS-based service discovery between them by container name, and you will be able to resolve them from each other. You can test connectivity like so: docker exec -it nginx-proxy ping test-app (provided ping is installed in that Docker container).
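Since ping is often not present in slim images, an alternative check that uses the BusyBox wget bundled with Alpine-based images (the proxy above is built FROM nginx:alpine) might be:
# fetch test-app's page from inside the proxy container
docker exec -it nginx-proxy wget -qO- http://test-app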


I have a web app running in a Docker container on port 9000. I need to route the traffic to Nginx in another container on the same network to access it at port 80. How do I achieve this? I tried building an Nginx image and adding an Nginx.conf, but my Nginx container stops immediately after it runs.
(The question included screenshots of the Nginx.conf file and of the running containers.)
You need to bind the internal ports from the containers to the host, like this:
application:
docker run -d \
  --network=random_name \
  <image>
nginx:
# each -p flag maps <hostPort>:<containerPort>
docker run -d \
  --network=random_name \
  -p 80:80 \
  -p 443:443 \
  <image>
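For the routing itself, a minimal nginx config sketch along these lines should work once both containers share the network (the upstream name webapp is my own placeholder; use your actual container name, and port 9000 comes from the question):
server {
    listen 80;
    location / {
        # Docker's embedded DNS resolves the container name
        # on a user-defined network
        proxy_pass http://webapp:9000;
    }
}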

How to access a Docker container on DigitalOcean?

When I run the following commands, I can successfully access 127.0.0.1:80 on localhost.
docker run -p 127.0.0.1:80:80 --name Mynginx -dt nginx
docker exec -it Mynginx bash
But if I run the commands on a DigitalOcean droplet, how do I access it? (I tried the droplet's IP address on port 80, but I get nothing.)
You need to publish the port. See the documentation for more information on how.
Running from the command line
If you run the containers from the command line, you can map the ports with the -p flag. You can map multiple ports.
docker run -dt -p 80:80 --name Mynginx nginx
or
docker run -dt -p 80:80 -p 443:443 --name Mynginx nginx
Docker Compose
If you're using docker-compose, publish the ports with the ports key in your YAML file (expose alone only makes a port available to other containers; it does not publish it on the host):
version: '2.3'
services:
  my_container:
    container_name: "Mynginx"
    image: nginx:latest
    ports:
      - "80:80"
You need to update your droplet's firewall settings to allow incoming connections on port 80. To do this, select your droplet.
Then go to Networking -> Manage Firewalls -> Create Firewall.
Then, under Inbound Rules, create a new HTTP rule by selecting HTTP from the dropdown menu. Scroll down and apply this firewall to your droplet; you should then be able to receive inbound traffic on port 80. You will have to add a similar rule for any other ports you want to open up.
See here for more details.
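Once the rule is applied, you can verify from your own machine that the droplet answers on port 80 (replace the placeholder with your droplet's IP):
curl -I http://<droplet_ip>/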

Make container accessible only from localhost

I have Docker Engine installed on Debian Jessie and I am running a container with nginx in it. My run command looks like this:
docker run -p 1234:80 -d -v /var/www/:/usr/share/nginx/html nginx:1.9
It works fine; the problem is that the content of this container is now accessible via http://{server_ip}:1234. I want to run multiple containers (domains) on this server, so I want to set up reverse proxies for them.
How can I make sure that the container is only accessible via the reverse proxy and not directly via IP:port? E.g.:
http://{server_ip}:1234 # not found, connection refused, etc...
http://localhost:1234 # works fine
//EDIT: Just to be clear - I am not asking how to set up a reverse proxy, but how to run a Docker container so that it is accessible only from localhost.
Specify the required host IP in the port mapping:
docker run -p 127.0.0.1:1234:80 -d -v /var/www/:/usr/share/nginx/html nginx:1.9
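You can confirm the binding afterwards with docker port, which should show the loopback address rather than 0.0.0.0 (use the container ID from docker ps, since the run command above does not name the container):
docker port <container_id>
# expected output: 80/tcp -> 127.0.0.1:1234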
If you are setting up a reverse proxy, you might want to put all the containers on a user-defined network along with the reverse proxy; then everything runs in a container and is reachable on their internal network.
docker network create web
docker run -d --net=web -v /var/www/:/usr/share/nginx/html nginx:1.9
docker run -d -p 80:80 --net=web haproxy
Well, the solution is pretty simple: you just have to specify 127.0.0.1 when mapping the port:
docker run -p 127.0.0.1:1234:80 -d -v /var/www/:/usr/share/nginx/html nginx:1.9

How to route HTTP access to multiple Docker containers

How do I route HTTP access for each domain to its specific Docker container? So,
any request for:
web1.mydomain.com goes to the Docker container with ID asda912kas
web2.mydomain.com goes to the Docker container with ID 8uada0a9sd
etc.
Every Docker container runs Apache, MySQL, and WordPress or another web app. web1.mydomain.com and web2.mydomain.com use the same public IP address (like an Apache vhost does).
If your web containers run on the same machine, you can use jwilder/nginx-proxy (https://github.com/jwilder/nginx-proxy).
You run it with port 80 mapped:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
Then you run your web containers with the environment variable VIRTUAL_HOST set:
docker run -d -e VIRTUAL_HOST=web1.mydomain.com image1
docker run -d -e VIRTUAL_HOST=web2.mydomain.com image2
This works well for small deployments.
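To test the routing before pointing real DNS at the server, you can send requests with an explicit Host header from the Docker host:
curl -H "Host: web1.mydomain.com" http://localhost/
curl -H "Host: web2.mydomain.com" http://localhost/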
