Deploy static website on port 5000 with docker and nginx - docker

I am trying to deploy a simple static index.html and style.css website with Docker. This is my nginx configuration on the host:
server {
#root /var/www/html;
# Add index.php to the list if you are using PHP
#index index.html index.htm index.nginx-debian.html;
server_name data-mastery.com; # managed by Certbot
location /blog {
proxy_pass http://127.0.0.1:5000;
}
listen 443 ssl; # managed by Certbot
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_certificate /etc/letsencrypt/live/data-mastery.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/data-mastery.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = data-mastery.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80 ;
listen [::]:80 ;
server_name data-mastery.com;
return 404; # managed by Certbot
}
This is my simple Dockerfile:
FROM nginx:alpine
COPY . /usr/share/nginx/html
I started the container with the following command:
sudo docker run -p 5000:5000 blog
I am not sure what this line means when I run docker ps:
80/tcp, 0.0.0.0:5000->5000/tcp
Is everything correct here or not?
How can I make it run on port 5000?
Thank you!

Updated answer:
Your use case is addressed in the nginx docker image documentation, so I'll paraphrase it.
$ docker run --name my-blog -v /path/to/your/static/content/:/usr/share/nginx/html:ro -d nginx
Alternatively, a simple Dockerfile can be used to generate a new image that includes the necessary content (which is a much cleaner solution than the bind mount above):
FROM nginx
COPY . /usr/share/nginx/html
Place this file in the same directory as your directory of content, run docker build -t my-blog ., then start your container:
$ docker run --name my-blog -d my-blog
Exposing external port
$ docker run --name some-nginx -d -p 8080:80 some-content-nginx
Then you can hit http://localhost:8080 or http://host-ip:8080 in your browser.
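Applied to the original question, the same pattern gives the port-5000 setup (assuming the image was built as blog, as in the question):

```shell
# Publish host port 5000 and map it to port 80,
# where the nginx:alpine image listens by default
docker run --name blog -d -p 5000:80 blog

# docker ps should then show: 0.0.0.0:5000->80/tcp
curl http://localhost:5000/
```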
Initial answer:
You should try something like:
sudo docker run -p 5000:80 blog
docker will forward connections to localhost:5000 on the host to your_container:80
on your_container:80 nginx is listening and serving the static files you copied into the image
I hope that helps

Related

failed (111: Connection refused) while connecting to upstream -- nginx/gunicorn connection via 127.0.0.1 [duplicate]

This question already has answers here:
Docker-compose/Nginx/Gunicorn - Nginx not working
How to communicate between Docker containers via "hostname"
Below are the important details:
Dockerfile for nginx build
FROM nginx:latest
EXPOSE 443
COPY nginx.conf /etc/nginx/nginx.conf
Nginx.conf
events {}
http {
client_max_body_size 1000M;
server {
server_name _;
location / {
proxy_pass http://127.0.0.1:8000/;
proxy_set_header Host $host;
}
listen 443 ssl;
ssl_certificate cert/name.crt;
ssl_certificate_key cert/name.key;
}
}
Nginx docker command
docker run -dit -p 0.0.0.0:443:443 -v /etc/cert/:/etc/nginx/cert <MY NGINX CONTAINER> nginx -g 'daemon off;'
Docker command to start gunicorn server
docker run -dit -p 127.0.0.1:8000:8000 <My FASTAPI CONTAINER> gunicorn -w 3 -k uvicorn.workers.UvicornWorker -b 127.0.0.1:8000 server:app
Other details:
I expose port 8000 in my Fastapi container docker build
I run nginx docker command right before the gunicorn docker command
I am currently testing with the python requests library and have set verify=False for the SSL configuration
Edit:
My issue related most directly to this post:
From inside of a Docker container, how do I connect to the localhost of the machine?
Binding to 0.0.0.0:8000 for my gunicorn run and adding the flag --network="host" to my docker run nginx command solved my issue.
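As a sketch of the container-to-container alternative (container and image names here are hypothetical, not from the original post): instead of --network="host", a user-defined bridge network lets nginx reach gunicorn by container name:

```shell
# Create a user-defined network; containers on it can resolve each other by name
docker network create appnet

# Start gunicorn bound to 0.0.0.0 so other containers can reach it
docker run -dit --name api --network appnet my-fastapi \
  gunicorn -w 3 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 server:app

# nginx can then use proxy_pass http://api:8000/; in its config
docker run -dit --name proxy --network appnet -p 443:443 my-nginx nginx -g 'daemon off;'
```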

Running Nginx Docker with SSL self signed certificate

I am trying to run a UI application with Docker using the nginx image. I am able to access the service on port 80 without any problem, but whenever I try to access it via https on port 443, the site keeps loading and eventually results in not accessible. I have updated the nginx.conf file in default.conf to allow access over port 443.
Following is my nginx.conf
charset utf-8;
server {
listen 80;
server_name localhost;
root /usr/nginx/html;
}
server {
listen 443;
server_name localhost;
root /usr/nginx/html;
}
I have added the SSL self-signed certificate in the /usr/nginx folder and exposed port 443 via Dockerfile
The following is my Dockerfile
FROM nginx
COPY dist /usr/nginx/html
RUN chmod -R 777 /usr/nginx/html/*
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY domain.crt /usr/nginx
EXPOSE 80:443
ENTRYPOINT nginx -g 'daemon off;'
Can anyone please explain why port 443 is not allowing any access?
For the nginx server to allow SSL encryption you need to provide the ssl flag on the listen directive in nginx.conf,
and the ssl certificate alone will not be sufficient: you also need the ssl certificate key, plus a password file if the key is passphrase-protected, and they must be configured.
charset utf-8;
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
}
server {
listen 443 ssl;
ssl_certificate /usr/nginx/ssl.crt;
ssl_certificate_key /usr/nginx/ssl.key;
ssl_password_file /usr/nginx/ssl.pass;
server_name localhost;
root /usr/nginx/html;
}
And you need to put the ssl certificate, key and password in place via volumes or by embedding them in the docker image. If you are running the container on a kubernetes cluster, adding them via kubernetes secrets will be the better option.
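For local testing, a self-signed certificate and key can be generated with openssl; the -nodes flag leaves the key unencrypted, in which case ssl_password_file is not needed (file names chosen to match the config above):

```shell
# Generate an unencrypted 2048-bit key and a self-signed cert for localhost
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout ssl.key -out ssl.crt \
  -days 365 -subj "/CN=localhost"
```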
For Dockerfile you can add like
FROM nginx
COPY dist /usr/nginx/html
RUN chmod -R 777 /usr/nginx/html/*
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY ssl.crt /usr/nginx/
COPY ssl.pass /usr/nginx/
COPY ssl.key /usr/nginx/
EXPOSE 80 443
ENTRYPOINT nginx -g 'daemon off;'
For further info you can refer to the Nginx Docker article https://medium.com/@agusnavce/nginx-server-with-ssl-certificates-with-lets-encrypt-in-docker-670caefc2e31

Nginx Reverse Proxy To Docker Container Web Apps Giving 404

I just made a fresh Ubuntu desktop vm, threw docker on it, threw Nginx on it, and pulled and ran the container yeasy/simple-web:latest, and ran it twice with the commands
docker run --rm -it -p 8000:80 yeasy/simple-web:latest
docker run --rm -it -p 8001:80 yeasy/simple-web:latest
I went over to /etc/nginx/sites-available and created a new file localhost.conf with the contents
server {
listen 80;
location /chad {
proxy_pass http://127.0.0.1:8000/;
}
location /brock {
proxy_pass http://127.0.0.1:8081/;
}
}
I then created a symlink of the localhost.conf file at /etc/nginx/sites-enabled with the command
ln -s ../sites-available/localhost.conf .
This was all done as root.
When I curl localhost:8000 and localhost:8001 I get the correct webpage hosted in the docker container. When I curl localhost/chad or localhost/brock, I get an Nginx 404 error. I have not touched the default config for Nginx, and did not modify the Docker images.
I am limited to using docker images and Nginx, so I cannot change technology stacks.
Not sure if you're already doing this but it's worth mentioning:
You need to reload or restart Nginx whenever you make changes to its configuration.
To reload Nginx, use one of the following commands:
sudo systemctl reload nginx
sudo service nginx reload
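It also helps to validate the configuration first, so a typo does not take the server down on reload (a common practice, not from the original answer):

```shell
# Test the configuration, and reload only if it is valid
sudo nginx -t && sudo systemctl reload nginx
```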
Following the above instructions, I ended up being able to host both of my docker containers with Nginx on the host machine using the following config.
server {
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
listen 127.0.0.1;
server_name localhost;
location / {
try_files $uri $uri/ =404;
}
location /chad {
proxy_pass http://127.0.0.1:8000/;
}
location /brock {
proxy_pass http://127.0.0.1:8001/;
}
}
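With the containers from the question still running, the routing can be checked from the host (paths as defined in the config above):

```shell
# Each path should return the page served by the corresponding container
curl -s http://localhost/chad
curl -s http://localhost/brock
```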

Configure HTTPS on nginx docker container which hosts angular app

I have a Dockerfile which deploys an angular app on an nginx docker container. I need to make it work with https, so I am creating a default-ssl.conf and copying it to the container. Currently I want it to work with localhost, since we do not have a domain yet. Please advise if there is any way to have https working with localhost for the deployed app on the nginx docker container. Thanks!
1st Dockerfile:
FROM nginx
COPY /meg /usr/share/nginx/html
ADD server.crt /etc/nginx/certs/
ADD server.key /etc/nginx/certs/
COPY default-ssl.conf /etc/nginx/conf.d/default-ssl.conf
COPY nginx.conf /etc/nginx/conf.d/nginx.conf
RUN ls /etc/nginx/certs/
COPY /Home /usr/share/nginx/html
2nd Dockerfile:
FROM nginx
ADD server.crt /etc/nginx/certs/
ADD server.key /etc/nginx/certs/
COPY nginx.conf /etc/nginx/conf.d/nginx.conf
RUN ls /etc/nginx/certs/
nginx.conf
server {
listen 443 ssl;
server_name localhost;
ssl_certificate /etc/nginx/certs/server.crt;
ssl_certificate_key /etc/nginx/certs/server.key;
location / {
proxy_pass http://localhost:8081/;
error_log /var/log/front_end_errors.log;
}
}
Two containers will be built using above docker files.
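A sketch of building and running the TLS-terminating container from the Dockerfiles above (my-angular-ssl is a hypothetical image name):

```shell
# Build the image from the directory containing the Dockerfile
docker build -t my-angular-ssl .

# Publish port 443 so https://localhost/ reaches the ssl server block
docker run -d --name angular-ssl -p 443:443 my-angular-ssl

# -k is needed because the certificate is self-signed
curl -k https://localhost/
```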

Gitlab links to "https://gitlab/"

I installed gitlab in a docker container from the official image gitlab/gitlab-ce:latest. This image has all config in the file gitlab.rb. Https is done by a nginx reverse proxy.
My problem is that when gitlab has an absolute link to itself, it always links to https://gitlab/. This host can also be seen in the "New group" dialog:
Docker call:
docker run \
--name git \
--net mydockernet \
--ip 172.18.0.2 \
--hostname git.mydomain.com \
--restart=always \
-p 766:22 \
-v /docker/gitlab/config:/etc/gitlab \
-v /docker/gitlab/logs:/var/log/gitlab \
-v /docker/gitlab/data:/var/opt/gitlab \
-d \
gitlab/gitlab-ce:latest
gitlab.rb:
external_url 'https://git.mydomain.com'
ci_nginx['enable'] = false
nginx['listen_port'] = 80
nginx['listen_https'] = false
gitlab_rails['gitlab_shell_ssh_port'] = 766
Nginx config:
upstream gitlab {
server 172.18.0.2;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name git.mydomain.com;
ssl on;
ssl_certificate /etc/ssl/certs/mydomain.com.chained.pem;
ssl_certificate_key /etc/ssl/private/my.key;
location / {
proxy_pass http://gitlab/;
proxy_read_timeout 10;
}
}
I tried to replace the wrong url with nginx. This worked for the appearance, like in the screenshot, but not for the links:
sub_filter 'https://gitlab/' 'https://$host/';
sub_filter_once off;
You've set your url correctly in your mapped volume /etc/gitlab/gitlab.rb
external_url 'https://git.mydomain.com'
Run sudo gitlab-ctl reconfigure for the change to take effect.
Problem solved. It was in the nginx reverse proxy config. I named the upstream "gitlab" and somehow this name found its way into the web pages:
upstream gitlab {
server 172.18.0.2;
}
...
proxy_pass http://gitlab/;
I didn't expect that. I even omitted the upstream part in my original question because I thought the upstream name is just an internal name and will be replaced by the defined ip.
So the fix for me is, to write the full domain into the upstream:
upstream git.mydomain.com {
server 172.18.0.2;
}
...
proxy_pass http://git.mydomain.com/;
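Another way to address this, keeping the short upstream name, is to pass the original Host header through explicitly: with proxy_pass to a named upstream, nginx sets the Host header to the upstream name unless told otherwise (a common nginx pattern, not from the original answer):

```nginx
upstream gitlab {
    server 172.18.0.2;
}

location / {
    proxy_pass http://gitlab/;
    # Forward the Host the client actually requested,
    # so GitLab builds links with git.mydomain.com instead of "gitlab"
    proxy_set_header Host $host;
    proxy_read_timeout 10;
}
```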
